00:00:00.001 Started by upstream project "autotest-per-patch" build number 132133 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.053 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.054 The recommended git tool is: git 00:00:00.054 using credential 00000000-0000-0000-0000-000000000002 00:00:00.056 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.085 Fetching changes from the remote Git repository 00:00:00.087 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.132 Using shallow fetch with depth 1 00:00:00.132 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.133 > git --version # timeout=10 00:00:00.182 > git --version # 'git version 2.39.2' 00:00:00.182 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.216 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.216 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.909 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.922 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.935 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:05.935 > git config core.sparsecheckout # timeout=10 00:00:05.947 > git read-tree -mu HEAD # timeout=10 00:00:05.965 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:05.985 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:05.985 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:06.071 [Pipeline] Start of Pipeline 00:00:06.081 [Pipeline] library 00:00:06.082 Loading library shm_lib@master 00:00:06.082 Library shm_lib@master is cached. Copying from home. 00:00:06.098 [Pipeline] node 00:00:06.120 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:06.121 [Pipeline] { 00:00:06.132 [Pipeline] catchError 00:00:06.134 [Pipeline] { 00:00:06.146 [Pipeline] wrap 00:00:06.154 [Pipeline] { 00:00:06.162 [Pipeline] stage 00:00:06.164 [Pipeline] { (Prologue) 00:00:06.361 [Pipeline] sh 00:00:06.644 + logger -p user.info -t JENKINS-CI 00:00:06.658 [Pipeline] echo 00:00:06.660 Node: WFP21 00:00:06.666 [Pipeline] sh 00:00:06.957 [Pipeline] setCustomBuildProperty 00:00:06.967 [Pipeline] echo 00:00:06.969 Cleanup processes 00:00:06.975 [Pipeline] sh 00:00:07.258 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.258 3513015 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.268 [Pipeline] sh 00:00:07.548 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.548 ++ grep -v 'sudo pgrep' 00:00:07.548 ++ awk '{print $1}' 00:00:07.548 + sudo kill -9 00:00:07.548 + true 00:00:07.565 [Pipeline] cleanWs 00:00:07.577 [WS-CLEANUP] Deleting project workspace... 00:00:07.577 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.583 [WS-CLEANUP] done 00:00:07.586 [Pipeline] setCustomBuildProperty 00:00:07.597 [Pipeline] sh 00:00:07.877 + sudo git config --global --replace-all safe.directory '*' 00:00:07.964 [Pipeline] httpRequest 00:00:09.151 [Pipeline] echo 00:00:09.153 Sorcerer 10.211.164.101 is alive 00:00:09.161 [Pipeline] retry 00:00:09.162 [Pipeline] { 00:00:09.173 [Pipeline] httpRequest 00:00:09.176 HttpMethod: GET 00:00:09.177 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:09.177 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:09.195 Response Code: HTTP/1.1 200 OK 00:00:09.195 Success: Status code 200 is in the accepted range: 200,404 00:00:09.195 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:27.511 [Pipeline] } 00:00:27.528 [Pipeline] // retry 00:00:27.536 [Pipeline] sh 00:00:27.820 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:27.835 [Pipeline] httpRequest 00:00:28.848 [Pipeline] echo 00:00:28.850 Sorcerer 10.211.164.101 is alive 00:00:28.861 [Pipeline] retry 00:00:28.864 [Pipeline] { 00:00:28.879 [Pipeline] httpRequest 00:00:28.883 HttpMethod: GET 00:00:28.884 URL: http://10.211.164.101/packages/spdk_899af6c35556773d93494c6a94d023acd5b69645.tar.gz 00:00:28.884 Sending request to url: http://10.211.164.101/packages/spdk_899af6c35556773d93494c6a94d023acd5b69645.tar.gz 00:00:28.903 Response Code: HTTP/1.1 200 OK 00:00:28.903 Success: Status code 200 is in the accepted range: 200,404 00:00:28.904 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_899af6c35556773d93494c6a94d023acd5b69645.tar.gz 00:01:08.201 [Pipeline] } 00:01:08.218 [Pipeline] // retry 00:01:08.226 [Pipeline] sh 00:01:08.511 + tar --no-same-owner -xf spdk_899af6c35556773d93494c6a94d023acd5b69645.tar.gz 00:01:11.058 [Pipeline] sh 00:01:11.342 + git -C spdk log --oneline -n5 00:01:11.342 899af6c35 lib/nvme: destruct controllers that failed init asynchronously 00:01:11.342 d1c46ed8e lib/rdma_provider: Add API to check if accel seq supported 00:01:11.342 a59d7e018 lib/mlx5: Add API to check if UMR registration supported 00:01:11.342 f6925f5e4 accel/mlx5: Merge crypto+copy to reg UMR 00:01:11.342 008a6371b accel/mlx5: Initial implementation of mlx5 platform driver 00:01:11.398 [Pipeline] } 00:01:11.413 [Pipeline] // stage 00:01:11.423 [Pipeline] stage 00:01:11.426 [Pipeline] { (Prepare) 00:01:11.443 [Pipeline] writeFile 00:01:11.459 [Pipeline] sh 00:01:11.744 + logger -p user.info -t JENKINS-CI 00:01:11.756 [Pipeline] sh 00:01:12.040 + logger -p user.info -t JENKINS-CI 00:01:12.052 [Pipeline] sh 00:01:12.335 + cat autorun-spdk.conf 00:01:12.335 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.335 SPDK_TEST_NVMF=1 00:01:12.335 SPDK_TEST_NVME_CLI=1 00:01:12.335 SPDK_TEST_NVMF_NICS=mlx5 00:01:12.335 SPDK_RUN_UBSAN=1 00:01:12.335 NET_TYPE=phy 00:01:12.343 RUN_NIGHTLY=0 00:01:12.348 [Pipeline] readFile 00:01:12.371 [Pipeline] withEnv 00:01:12.374 [Pipeline] { 00:01:12.386 [Pipeline] sh 00:01:12.670 + set -ex 00:01:12.670 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:12.670 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:12.670 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.670 ++ SPDK_TEST_NVMF=1 00:01:12.670 ++ SPDK_TEST_NVME_CLI=1 00:01:12.670 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:12.670 ++ SPDK_RUN_UBSAN=1 00:01:12.670 ++ NET_TYPE=phy 00:01:12.670 ++ RUN_NIGHTLY=0 
00:01:12.670 + case $SPDK_TEST_NVMF_NICS in 00:01:12.670 + DRIVERS=mlx5_ib 00:01:12.670 + [[ -n mlx5_ib ]] 00:01:12.670 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:12.670 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:19.240 rmmod: ERROR: Module irdma is not currently loaded 00:01:19.240 rmmod: ERROR: Module i40iw is not currently loaded 00:01:19.240 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:19.240 + true 00:01:19.240 + for D in $DRIVERS 00:01:19.240 + sudo modprobe mlx5_ib 00:01:19.241 + exit 0 00:01:19.250 [Pipeline] } 00:01:19.265 [Pipeline] // withEnv 00:01:19.270 [Pipeline] } 00:01:19.284 [Pipeline] // stage 00:01:19.294 [Pipeline] catchError 00:01:19.295 [Pipeline] { 00:01:19.310 [Pipeline] timeout 00:01:19.310 Timeout set to expire in 1 hr 0 min 00:01:19.312 [Pipeline] { 00:01:19.326 [Pipeline] stage 00:01:19.328 [Pipeline] { (Tests) 00:01:19.342 [Pipeline] sh 00:01:19.627 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:19.627 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:19.627 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:19.627 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:19.627 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:19.627 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:19.627 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:19.627 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:19.627 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:19.627 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:19.627 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:19.627 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:19.627 + source /etc/os-release 00:01:19.627 ++ NAME='Fedora Linux' 00:01:19.627 ++ VERSION='39 (Cloud Edition)' 00:01:19.627 ++ ID=fedora 00:01:19.627 ++ VERSION_ID=39 00:01:19.627 ++ VERSION_CODENAME= 00:01:19.627 ++ PLATFORM_ID=platform:f39 00:01:19.627 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:19.627 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:19.627 ++ LOGO=fedora-logo-icon 00:01:19.627 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:19.628 ++ HOME_URL=https://fedoraproject.org/ 00:01:19.628 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:19.628 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:19.628 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:19.628 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:19.628 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:19.628 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:19.628 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:19.628 ++ SUPPORT_END=2024-11-12 00:01:19.628 ++ VARIANT='Cloud Edition' 00:01:19.628 ++ VARIANT_ID=cloud 00:01:19.628 + uname -a 00:01:19.628 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:19.628 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:22.196 Hugepages 00:01:22.196 node hugesize free / total 00:01:22.196 node0 1048576kB 0 / 0 00:01:22.196 node0 2048kB 0 / 0 00:01:22.196 node1 1048576kB 0 / 0 00:01:22.196 node1 2048kB 0 / 0 00:01:22.196 00:01:22.196 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:22.196 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:22.196 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:22.196 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:22.196 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 
00:01:22.196 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:22.196 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:22.196 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:22.196 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:22.196 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:22.196 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:22.196 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:22.196 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:22.196 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:22.196 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:22.196 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:22.196 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:22.456 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:22.456 + rm -f /tmp/spdk-ld-path 00:01:22.456 + source autorun-spdk.conf 00:01:22.456 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.456 ++ SPDK_TEST_NVMF=1 00:01:22.456 ++ SPDK_TEST_NVME_CLI=1 00:01:22.456 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:22.456 ++ SPDK_RUN_UBSAN=1 00:01:22.456 ++ NET_TYPE=phy 00:01:22.456 ++ RUN_NIGHTLY=0 00:01:22.456 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:22.456 + [[ -n '' ]] 00:01:22.456 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:22.456 + for M in /var/spdk/build-*-manifest.txt 00:01:22.456 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:22.456 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:22.456 + for M in /var/spdk/build-*-manifest.txt 00:01:22.456 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:22.456 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:22.456 + for M in /var/spdk/build-*-manifest.txt 00:01:22.456 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:22.456 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:22.456 ++ uname 00:01:22.456 + [[ Linux == \L\i\n\u\x ]] 00:01:22.456 + sudo dmesg -T 00:01:22.456 + sudo dmesg --clear 00:01:22.456 + dmesg_pid=3513937 00:01:22.456 + [[ Fedora Linux == FreeBSD ]] 00:01:22.456 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.456 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.456 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:22.456 + [[ -x /usr/src/fio-static/fio ]] 00:01:22.456 + export FIO_BIN=/usr/src/fio-static/fio 00:01:22.456 + FIO_BIN=/usr/src/fio-static/fio 00:01:22.456 + sudo dmesg -Tw 00:01:22.456 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:22.456 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:22.456 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:22.456 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.456 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.456 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:22.456 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.456 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.456 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:22.456 10:29:50 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:22.456 10:29:50 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:22.456 10:29:50 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.456 10:29:50 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:22.456 10:29:50 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:22.456 10:29:50 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5 00:01:22.456 10:29:50 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:01:22.456 10:29:50 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ NET_TYPE=phy 00:01:22.456 10:29:50 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ RUN_NIGHTLY=0 00:01:22.456 10:29:50 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:22.456 10:29:50 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:22.717 10:29:50 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:22.717 10:29:50 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:22.717 10:29:50 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:22.717 10:29:50 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:22.717 10:29:50 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:22.717 10:29:50 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:22.717 10:29:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.717 10:29:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.717 10:29:50 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.717 10:29:50 -- paths/export.sh@5 -- $ export PATH 00:01:22.717 10:29:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.717 10:29:50 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:22.717 10:29:50 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:22.717 10:29:50 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730971790.XXXXXX 00:01:22.717 10:29:50 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730971790.AooZpF 00:01:22.717 10:29:50 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:22.717 10:29:50 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:22.717 10:29:50 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:22.717 10:29:50 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:22.718 10:29:50 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:22.718 10:29:50 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:22.718 10:29:50 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:22.718 10:29:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.718 10:29:50 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:22.718 10:29:50 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:22.718 10:29:50 -- pm/common@17 -- $ local monitor 00:01:22.718 10:29:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.718 10:29:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.718 10:29:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.718 10:29:50 -- pm/common@21 -- $ date +%s 00:01:22.718 10:29:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.718 10:29:50 -- pm/common@21 -- $ date +%s 00:01:22.718 10:29:50 -- pm/common@25 -- $ sleep 1 00:01:22.718 10:29:50 -- pm/common@21 -- $ date +%s 00:01:22.718 10:29:50 -- pm/common@21 -- $ date +%s 00:01:22.718 10:29:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 
-l -p monitor.autobuild.sh.1730971790 00:01:22.718 10:29:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730971790 00:01:22.718 10:29:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730971790 00:01:22.718 10:29:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730971790 00:01:22.718 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730971790_collect-vmstat.pm.log 00:01:22.718 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730971790_collect-cpu-load.pm.log 00:01:22.718 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730971790_collect-cpu-temp.pm.log 00:01:22.718 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730971790_collect-bmc-pm.bmc.pm.log 00:01:23.657 10:29:51 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:23.657 10:29:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:23.657 10:29:51 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:23.657 10:29:51 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:23.657 10:29:51 -- spdk/autobuild.sh@16 -- $ date -u 00:01:23.657 Thu Nov 7 09:29:51 AM UTC 2024 00:01:23.657 10:29:51 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:23.657 v25.01-pre-171-g899af6c35 00:01:23.657 10:29:51 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:23.657 10:29:51 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:23.657 10:29:51 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:23.657 10:29:51 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:23.657 10:29:51 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:23.658 10:29:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.658 ************************************ 00:01:23.658 START TEST ubsan 00:01:23.658 ************************************ 00:01:23.658 10:29:51 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:23.658 using ubsan 00:01:23.658 00:01:23.658 real 0m0.001s 00:01:23.658 user 0m0.000s 00:01:23.658 sys 0m0.000s 00:01:23.658 10:29:51 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:23.658 10:29:51 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:23.658 ************************************ 00:01:23.658 END TEST ubsan 00:01:23.658 ************************************ 00:01:23.917 10:29:51 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:23.917 10:29:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:23.917 10:29:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:23.917 10:29:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:23.917 10:29:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:23.917 10:29:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:23.917 10:29:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:23.917 10:29:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:23.917 10:29:51 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug 
--enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:23.917 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:23.917 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:24.177 Using 'verbs' RDMA provider 00:01:37.339 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:52.224 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:52.224 Creating mk/config.mk...done. 00:01:52.224 Creating mk/cc.flags.mk...done. 00:01:52.224 Type 'make' to build. 00:01:52.224 10:30:18 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:01:52.224 10:30:18 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:52.224 10:30:18 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:52.224 10:30:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.224 ************************************ 00:01:52.224 START TEST make 00:01:52.224 ************************************ 00:01:52.225 10:30:18 make -- common/autotest_common.sh@1127 -- $ make -j112 00:01:52.225 make[1]: Nothing to be done for 'all'. 00:02:00.363 The Meson build system 00:02:00.363 Version: 1.5.0 00:02:00.363 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:02:00.363 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:02:00.363 Build type: native build 00:02:00.363 Program cat found: YES (/usr/bin/cat) 00:02:00.363 Project name: DPDK 00:02:00.363 Project version: 24.03.0 00:02:00.363 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:00.363 C linker for the host machine: cc ld.bfd 2.40-14 00:02:00.363 Host machine cpu family: x86_64 00:02:00.363 Host machine cpu: x86_64 00:02:00.363 Message: ## Building in Developer Mode ## 00:02:00.363 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:00.363 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:00.363 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:00.363 Program python3 found: YES (/usr/bin/python3) 00:02:00.363 Program cat found: YES (/usr/bin/cat) 00:02:00.363 Compiler for C supports arguments -march=native: YES 00:02:00.363 Checking for size of "void *" : 8 00:02:00.363 Checking for size of "void *" : 8 (cached) 00:02:00.363 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:00.363 Library m found: YES 00:02:00.363 Library numa found: YES 00:02:00.363 Has header "numaif.h" : YES 00:02:00.363 Library fdt found: NO 00:02:00.363 Library execinfo found: NO 00:02:00.363 Has header "execinfo.h" : YES 00:02:00.363 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:00.363 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:00.363 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:00.363 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:00.363 Run-time dependency openssl found: YES 3.1.1 00:02:00.363 Run-time dependency libpcap found: YES 1.10.4 00:02:00.363 Has header "pcap.h" with dependency libpcap: YES 00:02:00.363 Compiler for C supports arguments -Wcast-qual: YES 00:02:00.363 Compiler for C supports arguments -Wdeprecated: YES 00:02:00.363 Compiler for C supports 
arguments -Wformat: YES 00:02:00.363 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:00.363 Compiler for C supports arguments -Wformat-security: NO 00:02:00.363 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:00.363 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:00.363 Compiler for C supports arguments -Wnested-externs: YES 00:02:00.363 Compiler for C supports arguments -Wold-style-definition: YES 00:02:00.363 Compiler for C supports arguments -Wpointer-arith: YES 00:02:00.363 Compiler for C supports arguments -Wsign-compare: YES 00:02:00.363 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:00.363 Compiler for C supports arguments -Wundef: YES 00:02:00.363 Compiler for C supports arguments -Wwrite-strings: YES 00:02:00.363 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:00.363 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:00.363 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:00.363 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:00.363 Program objdump found: YES (/usr/bin/objdump) 00:02:00.363 Compiler for C supports arguments -mavx512f: YES 00:02:00.363 Checking if "AVX512 checking" compiles: YES 00:02:00.363 Fetching value of define "__SSE4_2__" : 1 00:02:00.363 Fetching value of define "__AES__" : 1 00:02:00.363 Fetching value of define "__AVX__" : 1 00:02:00.363 Fetching value of define "__AVX2__" : 1 00:02:00.363 Fetching value of define "__AVX512BW__" : 1 00:02:00.363 Fetching value of define "__AVX512CD__" : 1 00:02:00.363 Fetching value of define "__AVX512DQ__" : 1 00:02:00.363 Fetching value of define "__AVX512F__" : 1 00:02:00.363 Fetching value of define "__AVX512VL__" : 1 00:02:00.363 Fetching value of define "__PCLMUL__" : 1 00:02:00.363 Fetching value of define "__RDRND__" : 1 00:02:00.363 Fetching value of define "__RDSEED__" : 1 00:02:00.363 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:00.363 Fetching value of define "__znver1__" : (undefined) 00:02:00.363 Fetching value of define "__znver2__" : (undefined) 00:02:00.363 Fetching value of define "__znver3__" : (undefined) 00:02:00.363 Fetching value of define "__znver4__" : (undefined) 00:02:00.363 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:00.363 Message: lib/log: Defining dependency "log" 00:02:00.363 Message: lib/kvargs: Defining dependency "kvargs" 00:02:00.363 Message: lib/telemetry: Defining dependency "telemetry" 00:02:00.363 Checking for function "getentropy" : NO 00:02:00.363 Message: lib/eal: Defining dependency "eal" 00:02:00.363 Message: lib/ring: Defining dependency "ring" 00:02:00.363 Message: lib/rcu: Defining dependency "rcu" 00:02:00.363 Message: lib/mempool: Defining dependency "mempool" 00:02:00.363 Message: lib/mbuf: Defining dependency "mbuf" 00:02:00.363 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:00.363 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:00.363 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:00.363 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:00.363 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:00.363 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:00.363 Compiler for C supports arguments -mpclmul: YES 00:02:00.363 Compiler for C supports arguments -maes: YES 00:02:00.364 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:00.364 Compiler for C supports arguments -mavx512bw: YES 
00:02:00.364 Compiler for C supports arguments -mavx512dq: YES 00:02:00.364 Compiler for C supports arguments -mavx512vl: YES 00:02:00.364 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:00.364 Compiler for C supports arguments -mavx2: YES 00:02:00.364 Compiler for C supports arguments -mavx: YES 00:02:00.364 Message: lib/net: Defining dependency "net" 00:02:00.364 Message: lib/meter: Defining dependency "meter" 00:02:00.364 Message: lib/ethdev: Defining dependency "ethdev" 00:02:00.364 Message: lib/pci: Defining dependency "pci" 00:02:00.364 Message: lib/cmdline: Defining dependency "cmdline" 00:02:00.364 Message: lib/hash: Defining dependency "hash" 00:02:00.364 Message: lib/timer: Defining dependency "timer" 00:02:00.364 Message: lib/compressdev: Defining dependency "compressdev" 00:02:00.364 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:00.364 Message: lib/dmadev: Defining dependency "dmadev" 00:02:00.364 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:00.364 Message: lib/power: Defining dependency "power" 00:02:00.364 Message: lib/reorder: Defining dependency "reorder" 00:02:00.364 Message: lib/security: Defining dependency "security" 00:02:00.364 Has header "linux/userfaultfd.h" : YES 00:02:00.364 Has header "linux/vduse.h" : YES 00:02:00.364 Message: lib/vhost: Defining dependency "vhost" 00:02:00.364 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:00.364 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:00.364 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:00.364 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:00.364 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:00.364 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:00.364 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:00.364 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:00.364 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:00.364 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:00.364 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:00.364 Configuring doxy-api-html.conf using configuration 00:02:00.364 Configuring doxy-api-man.conf using configuration 00:02:00.364 Program mandb found: YES (/usr/bin/mandb) 00:02:00.364 Program sphinx-build found: NO 00:02:00.364 Configuring rte_build_config.h using configuration 00:02:00.364 Message: 00:02:00.364 ================= 00:02:00.364 Applications Enabled 00:02:00.364 ================= 00:02:00.364 00:02:00.364 apps: 00:02:00.364 00:02:00.364 00:02:00.364 Message: 00:02:00.364 ================= 00:02:00.364 Libraries Enabled 00:02:00.364 ================= 00:02:00.364 00:02:00.364 libs: 00:02:00.364 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:00.364 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:00.364 cryptodev, dmadev, power, reorder, security, vhost, 00:02:00.364 00:02:00.364 Message: 00:02:00.364 =============== 00:02:00.364 Drivers Enabled 00:02:00.364 =============== 00:02:00.364 00:02:00.364 common: 00:02:00.364 00:02:00.364 bus: 00:02:00.364 pci, vdev, 00:02:00.364 mempool: 00:02:00.364 ring, 00:02:00.364 dma: 00:02:00.364 00:02:00.364 net: 00:02:00.364 00:02:00.364 crypto: 00:02:00.364 00:02:00.364 compress: 00:02:00.364 00:02:00.364 vdpa: 00:02:00.364 00:02:00.364 00:02:00.364 Message: 00:02:00.364 ================= 00:02:00.364 
Content Skipped 00:02:00.364 ================= 00:02:00.364 00:02:00.364 apps: 00:02:00.364 dumpcap: explicitly disabled via build config 00:02:00.364 graph: explicitly disabled via build config 00:02:00.364 pdump: explicitly disabled via build config 00:02:00.364 proc-info: explicitly disabled via build config 00:02:00.364 test-acl: explicitly disabled via build config 00:02:00.364 test-bbdev: explicitly disabled via build config 00:02:00.364 test-cmdline: explicitly disabled via build config 00:02:00.364 test-compress-perf: explicitly disabled via build config 00:02:00.364 test-crypto-perf: explicitly disabled via build config 00:02:00.364 test-dma-perf: explicitly disabled via build config 00:02:00.364 test-eventdev: explicitly disabled via build config 00:02:00.364 test-fib: explicitly disabled via build config 00:02:00.364 test-flow-perf: explicitly disabled via build config 00:02:00.364 test-gpudev: explicitly disabled via build config 00:02:00.364 test-mldev: explicitly disabled via build config 00:02:00.364 test-pipeline: explicitly disabled via build config 00:02:00.364 test-pmd: explicitly disabled via build config 00:02:00.364 test-regex: explicitly disabled via build config 00:02:00.364 test-sad: explicitly disabled via build config 00:02:00.364 test-security-perf: explicitly disabled via build config 00:02:00.364 00:02:00.364 libs: 00:02:00.364 argparse: explicitly disabled via build config 00:02:00.364 metrics: explicitly disabled via build config 00:02:00.364 acl: explicitly disabled via build config 00:02:00.364 bbdev: explicitly disabled via build config 00:02:00.364 bitratestats: explicitly disabled via build config 00:02:00.364 bpf: explicitly disabled via build config 00:02:00.364 cfgfile: explicitly disabled via build config 00:02:00.364 distributor: explicitly disabled via build config 00:02:00.364 efd: explicitly disabled via build config 00:02:00.364 eventdev: explicitly disabled via build config 00:02:00.364 dispatcher: explicitly disabled via build config 00:02:00.364 gpudev: explicitly disabled via build config 00:02:00.364 gro: explicitly disabled via build config 00:02:00.364 gso: explicitly disabled via build config 00:02:00.364 ip_frag: explicitly disabled via build config 00:02:00.364 jobstats: explicitly disabled via build config 00:02:00.364 latencystats: explicitly disabled via build config 00:02:00.364 lpm: explicitly disabled via build config 00:02:00.364 member: explicitly disabled via build config 00:02:00.364 pcapng: explicitly disabled via build config 00:02:00.364 rawdev: explicitly disabled via build config 00:02:00.364 regexdev: explicitly disabled via build config 00:02:00.364 mldev: explicitly disabled via build config 00:02:00.364 rib: explicitly disabled via build config 00:02:00.364 sched: explicitly disabled via build config 00:02:00.364 stack: explicitly disabled via build config 00:02:00.364 ipsec: explicitly disabled via build config 00:02:00.364 pdcp: explicitly disabled via build config 00:02:00.364 fib: explicitly disabled via build config 00:02:00.364 port: explicitly disabled via build config 00:02:00.364 pdump: explicitly disabled via build config 00:02:00.364 table: explicitly disabled via build config 00:02:00.364 pipeline: explicitly disabled via build config 00:02:00.364 graph: explicitly disabled via build config 00:02:00.364 node: explicitly disabled via build config 00:02:00.364 00:02:00.364 drivers: 00:02:00.364 common/cpt: not in enabled drivers build config 00:02:00.364 common/dpaax: not in enabled drivers build config 
00:02:00.364 common/iavf: not in enabled drivers build config 00:02:00.364 common/idpf: not in enabled drivers build config 00:02:00.364 common/ionic: not in enabled drivers build config 00:02:00.364 common/mvep: not in enabled drivers build config 00:02:00.364 common/octeontx: not in enabled drivers build config 00:02:00.364 bus/auxiliary: not in enabled drivers build config 00:02:00.364 bus/cdx: not in enabled drivers build config 00:02:00.364 bus/dpaa: not in enabled drivers build config 00:02:00.364 bus/fslmc: not in enabled drivers build config 00:02:00.364 bus/ifpga: not in enabled drivers build config 00:02:00.364 bus/platform: not in enabled drivers build config 00:02:00.364 bus/uacce: not in enabled drivers build config 00:02:00.364 bus/vmbus: not in enabled drivers build config 00:02:00.364 common/cnxk: not in enabled drivers build config 00:02:00.364 common/mlx5: not in enabled drivers build config 00:02:00.364 common/nfp: not in enabled drivers build config 00:02:00.364 common/nitrox: not in enabled drivers build config 00:02:00.364 common/qat: not in enabled drivers build config 00:02:00.364 common/sfc_efx: not in enabled drivers build config 00:02:00.364 mempool/bucket: not in enabled drivers build config 00:02:00.364 mempool/cnxk: not in enabled drivers build config 00:02:00.364 mempool/dpaa: not in enabled drivers build config 00:02:00.364 mempool/dpaa2: not in enabled drivers build config 00:02:00.364 mempool/octeontx: not in enabled drivers build config 00:02:00.364 mempool/stack: not in enabled drivers build config 00:02:00.364 dma/cnxk: not in enabled drivers build config 00:02:00.364 dma/dpaa: not in enabled drivers build config 00:02:00.364 dma/dpaa2: not in enabled drivers build config 00:02:00.364 dma/hisilicon: not in enabled drivers build config 00:02:00.364 dma/idxd: not in enabled drivers build config 00:02:00.364 dma/ioat: not in enabled drivers build config 00:02:00.364 dma/skeleton: not in enabled drivers build config 00:02:00.364 net/af_packet: not in enabled drivers build config 00:02:00.364 net/af_xdp: not in enabled drivers build config 00:02:00.364 net/ark: not in enabled drivers build config 00:02:00.364 net/atlantic: not in enabled drivers build config 00:02:00.364 net/avp: not in enabled drivers build config 00:02:00.364 net/axgbe: not in enabled drivers build config 00:02:00.364 net/bnx2x: not in enabled drivers build config 00:02:00.364 net/bnxt: not in enabled drivers build config 00:02:00.364 net/bonding: not in enabled drivers build config 00:02:00.364 net/cnxk: not in enabled drivers build config 00:02:00.364 net/cpfl: not in enabled drivers build config 00:02:00.364 net/cxgbe: not in enabled drivers build config 00:02:00.364 net/dpaa: not in enabled drivers build config 00:02:00.365 net/dpaa2: not in enabled drivers build config 00:02:00.365 net/e1000: not in enabled drivers build config 00:02:00.365 net/ena: not in enabled drivers build config 00:02:00.365 net/enetc: not in enabled drivers build config 00:02:00.365 net/enetfec: not in enabled drivers build config 00:02:00.365 net/enic: not in enabled drivers build config 00:02:00.365 net/failsafe: not in enabled drivers build config 00:02:00.365 net/fm10k: not in enabled drivers build config 00:02:00.365 net/gve: not in enabled drivers build config 00:02:00.365 net/hinic: not in enabled drivers build config 00:02:00.365 net/hns3: not in enabled drivers build config 00:02:00.365 net/i40e: not in enabled drivers build config 00:02:00.365 net/iavf: not in enabled drivers build config 00:02:00.365 
net/ice: not in enabled drivers build config 00:02:00.365 net/idpf: not in enabled drivers build config 00:02:00.365 net/igc: not in enabled drivers build config 00:02:00.365 net/ionic: not in enabled drivers build config 00:02:00.365 net/ipn3ke: not in enabled drivers build config 00:02:00.365 net/ixgbe: not in enabled drivers build config 00:02:00.365 net/mana: not in enabled drivers build config 00:02:00.365 net/memif: not in enabled drivers build config 00:02:00.365 net/mlx4: not in enabled drivers build config 00:02:00.365 net/mlx5: not in enabled drivers build config 00:02:00.365 net/mvneta: not in enabled drivers build config 00:02:00.365 net/mvpp2: not in enabled drivers build config 00:02:00.365 net/netvsc: not in enabled drivers build config 00:02:00.365 net/nfb: not in enabled drivers build config 00:02:00.365 net/nfp: not in enabled drivers build config 00:02:00.365 net/ngbe: not in enabled drivers build config 00:02:00.365 net/null: not in enabled drivers build config 00:02:00.365 net/octeontx: not in enabled drivers build config 00:02:00.365 net/octeon_ep: not in enabled drivers build config 00:02:00.365 net/pcap: not in enabled drivers build config 00:02:00.365 net/pfe: not in enabled drivers build config 00:02:00.365 net/qede: not in enabled drivers build config 00:02:00.365 net/ring: not in enabled drivers build config 00:02:00.365 net/sfc: not in enabled drivers build config 00:02:00.365 net/softnic: not in enabled drivers build config 00:02:00.365 net/tap: not in enabled drivers build config 00:02:00.365 net/thunderx: not in enabled drivers build config 00:02:00.365 net/txgbe: not in enabled drivers build config 00:02:00.365 net/vdev_netvsc: not in enabled drivers build config 00:02:00.365 net/vhost: not in enabled drivers build config 00:02:00.365 net/virtio: not in enabled drivers build config 00:02:00.365 net/vmxnet3: not in enabled drivers build config 00:02:00.365 raw/*: missing internal dependency, "rawdev" 00:02:00.365 crypto/armv8: not in enabled drivers build config 00:02:00.365 crypto/bcmfs: not in enabled drivers build config 00:02:00.365 crypto/caam_jr: not in enabled drivers build config 00:02:00.365 crypto/ccp: not in enabled drivers build config 00:02:00.365 crypto/cnxk: not in enabled drivers build config 00:02:00.365 crypto/dpaa_sec: not in enabled drivers build config 00:02:00.365 crypto/dpaa2_sec: not in enabled drivers build config 00:02:00.365 crypto/ipsec_mb: not in enabled drivers build config 00:02:00.365 crypto/mlx5: not in enabled drivers build config 00:02:00.365 crypto/mvsam: not in enabled drivers build config 00:02:00.365 crypto/nitrox: not in enabled drivers build config 00:02:00.365 crypto/null: not in enabled drivers build config 00:02:00.365 crypto/octeontx: not in enabled drivers build config 00:02:00.365 crypto/openssl: not in enabled drivers build config 00:02:00.365 crypto/scheduler: not in enabled drivers build config 00:02:00.365 crypto/uadk: not in enabled drivers build config 00:02:00.365 crypto/virtio: not in enabled drivers build config 00:02:00.365 compress/isal: not in enabled drivers build config 00:02:00.365 compress/mlx5: not in enabled drivers build config 00:02:00.365 compress/nitrox: not in enabled drivers build config 00:02:00.365 compress/octeontx: not in enabled drivers build config 00:02:00.365 compress/zlib: not in enabled drivers build config 00:02:00.365 regex/*: missing internal dependency, "regexdev" 00:02:00.365 ml/*: missing internal dependency, "mldev" 00:02:00.365 vdpa/ifc: not in enabled drivers build 
config 00:02:00.365 vdpa/mlx5: not in enabled drivers build config 00:02:00.365 vdpa/nfp: not in enabled drivers build config 00:02:00.365 vdpa/sfc: not in enabled drivers build config 00:02:00.365 event/*: missing internal dependency, "eventdev" 00:02:00.365 baseband/*: missing internal dependency, "bbdev" 00:02:00.365 gpu/*: missing internal dependency, "gpudev" 00:02:00.365 00:02:00.365 00:02:00.365 Build targets in project: 85 00:02:00.365 00:02:00.365 DPDK 24.03.0 00:02:00.365 00:02:00.365 User defined options 00:02:00.365 buildtype : debug 00:02:00.365 default_library : shared 00:02:00.365 libdir : lib 00:02:00.365 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:00.365 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:00.365 c_link_args : 00:02:00.365 cpu_instruction_set: native 00:02:00.365 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:00.365 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:00.365 enable_docs : false 00:02:00.365 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:00.365 enable_kmods : false 00:02:00.365 max_lcores : 128 00:02:00.365 tests : false 00:02:00.365 00:02:00.365 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:00.365 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:02:00.365 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:00.365 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:00.365 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:00.365 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:00.365 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:00.365 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:00.365 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:00.365 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:00.365 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:00.365 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:00.365 [11/268] Linking static target lib/librte_kvargs.a 00:02:00.365 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:00.365 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:00.365 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:00.365 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:00.365 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:00.365 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:00.365 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:00.365 [19/268] Linking static target lib/librte_log.a 00:02:00.624 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:00.624 [21/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:00.624 [22/268] Linking static target lib/librte_pci.a 00:02:00.624 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:00.624 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:00.624 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:00.624 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:00.624 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:00.624 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:00.624 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:00.624 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:00.624 [31/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:00.624 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:00.624 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:00.624 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:00.884 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:00.884 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.884 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.884 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:00.884 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.884 [40/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:00.884 [41/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:00.884 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:00.884 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:00.884 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:00.884 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:00.884 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:00.884 [47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:00.884 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:00.884 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:00.884 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:00.884 [51/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:00.884 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:00.884 [53/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:00.884 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:00.884 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:00.884 [56/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:00.884 [57/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:00.885 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:00.885 [59/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:00.885 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.885 [61/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:00.885 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:00.885 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:00.885 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:00.885 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:00.885 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:00.885 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:00.885 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:00.885 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:00.885 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:00.885 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:00.885 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:00.885 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:00.885 [74/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:00.885 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:00.885 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:00.885 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:00.885 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:00.885 [79/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.885 [80/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:00.885 [81/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:00.885 [82/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:00.885 [83/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:00.885 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:00.885 [85/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:00.885 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:00.885 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:00.885 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:00.885 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:00.885 [90/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:00.885 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:00.885 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:00.885 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:00.885 [94/268] Linking static target lib/librte_ring.a
00:02:00.885 [95/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:00.885 [96/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:00.885 [97/268] Linking static target lib/librte_telemetry.a
00:02:00.885 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:00.885 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:00.885 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:00.885 [101/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:00.885 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:00.885 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:00.885 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:00.885 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:00.885 [106/268] Linking static target lib/librte_cmdline.a
00:02:00.885 [107/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:01.144 [108/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:01.144 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:01.144 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:01.144 [111/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:01.144 [112/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:01.144 [113/268] Linking static target lib/librte_meter.a
00:02:01.144 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:01.144 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:01.144 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:01.144 [117/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.144 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:01.144 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:01.144 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:01.144 [121/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:01.144 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:01.144 [123/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:01.144 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:01.144 [125/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:01.144 [126/268] Linking static target lib/librte_timer.a
00:02:01.144 [127/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:01.144 [128/268] Linking static target lib/librte_net.a
00:02:01.144 [129/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:01.144 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:01.144 [131/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:01.144 [132/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:01.144 [133/268] Linking static target lib/librte_mempool.a
00:02:01.144 [134/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:01.144 [135/268] Linking static target lib/librte_eal.a
00:02:01.144 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:01.144 [137/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:01.144 [138/268] Linking static target lib/librte_rcu.a
00:02:01.144 [139/268] Linking static target lib/librte_dmadev.a
00:02:01.144 [140/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:01.144 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:01.144 [142/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:01.144 [143/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:01.144 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:01.144 [145/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:01.144 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:01.144 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:01.144 [148/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:01.144 [149/268] Linking static target lib/librte_compressdev.a
00:02:01.144 [150/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:01.144 [151/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:01.144 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:01.144 [153/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:01.144 [154/268] Linking static target lib/librte_mbuf.a
00:02:01.144 [155/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.144 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:01.144 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:01.144 [158/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:01.144 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:01.144 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:01.144 [161/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.144 [162/268] Linking target lib/librte_log.so.24.1
00:02:01.404 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:01.404 [164/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.404 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:01.404 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:01.404 [167/268] Linking static target lib/librte_power.a
00:02:01.404 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:01.404 [169/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:01.404 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:01.404 [171/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:01.404 [172/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:01.404 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:01.404 [174/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:01.404 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:01.404 [176/268] Linking static target lib/librte_reorder.a
00:02:01.404 [177/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:01.404 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:01.404 [179/268] Linking static target lib/librte_hash.a
00:02:01.404 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:01.404 [181/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:01.404 [182/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.404 [183/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:01.404 [184/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:01.404 [185/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:01.404 [186/268] Linking static target lib/librte_security.a
00:02:01.404 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:01.404 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:01.404 [189/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:01.404 [190/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:01.404 [191/268] Linking target lib/librte_kvargs.so.24.1
00:02:01.404 [192/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.404 [193/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:01.404 [194/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.404 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:01.404 [196/268] Linking static target lib/librte_cryptodev.a
00:02:01.404 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:01.404 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:01.664 [199/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.664 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:01.664 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:01.664 [202/268] Linking static target drivers/librte_bus_vdev.a
00:02:01.664 [203/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:01.664 [204/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:01.664 [205/268] Linking target lib/librte_telemetry.so.24.1
00:02:01.664 [206/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:01.665 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:01.665 [208/268] Linking static target drivers/librte_mempool_ring.a
00:02:01.665 [209/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:01.665 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:01.665 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:01.665 [212/268] Linking static target drivers/librte_bus_pci.a
00:02:01.665 [213/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:01.923 [214/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.923 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.924 [216/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.924 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:01.924 [218/268] Linking static target lib/librte_ethdev.a
00:02:01.924 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.924 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.181 [221/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.181 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.181 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:02.181 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.181 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.439 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.439 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.007 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:03.266 [229/268] Linking static target lib/librte_vhost.a
00:02:03.524 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.463 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.158 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.538 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.538 [234/268] Linking target lib/librte_eal.so.24.1
00:02:13.538 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:13.538 [236/268] Linking target lib/librte_timer.so.24.1
00:02:13.538 [237/268] Linking target lib/librte_meter.so.24.1
00:02:13.538 [238/268] Linking target lib/librte_ring.so.24.1
00:02:13.538 [239/268] Linking target drivers/librte_bus_vdev.so.24.1
00:02:13.538 [240/268] Linking target lib/librte_pci.so.24.1
00:02:13.538 [241/268] Linking target lib/librte_dmadev.so.24.1
00:02:13.538 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:13.538 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:13.538 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:13.796 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:13.796 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:13.796 [247/268] Linking target lib/librte_rcu.so.24.1
00:02:13.796 [248/268] Linking target drivers/librte_bus_pci.so.24.1
00:02:13.796 [249/268] Linking target lib/librte_mempool.so.24.1
00:02:13.796 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:13.796 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:13.796 [252/268] Linking target lib/librte_mbuf.so.24.1
00:02:13.796 [253/268] Linking target drivers/librte_mempool_ring.so.24.1
00:02:14.055 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:14.055 [255/268] Linking target lib/librte_compressdev.so.24.1
00:02:14.055 [256/268] Linking target lib/librte_net.so.24.1
00:02:14.055 [257/268] Linking target lib/librte_reorder.so.24.1
00:02:14.055 [258/268] Linking target lib/librte_cryptodev.so.24.1
00:02:14.055 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:14.055 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:14.315 [261/268] Linking target lib/librte_cmdline.so.24.1
00:02:14.315 [262/268] Linking target lib/librte_security.so.24.1
00:02:14.315 [263/268] Linking target lib/librte_hash.so.24.1
00:02:14.315 [264/268] Linking target lib/librte_ethdev.so.24.1
00:02:14.315 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:14.315 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:14.315 [267/268] Linking target lib/librte_power.so.24.1
00:02:14.573 [268/268] Linking target lib/librte_vhost.so.24.1
00:02:14.573 INFO: autodetecting backend as ninja
00:02:14.573 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112
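The two INFO lines above are the build wrapper resolving the DPDK meson build directory to a ninja invocation. A minimal sketch of driving that same build-tmp directory by hand, assuming meson and ninja are on PATH (the -j value below is illustrative; this host resolved to -j 112):

  # Sketch only: rebuild the DPDK subproject the way the log's ninja command does.
  # 'meson setup' is only needed on a fresh checkout; build-tmp already exists here.
  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
  meson setup build-tmp                 # one-time configure (skip if already configured)
  ninja -C build-tmp -j "$(nproc)"      # what the INFO line above resolves to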
00:02:21.146 CC lib/log/log_deprecated.o
00:02:21.146 CC lib/log/log.o
00:02:21.146 CC lib/log/log_flags.o
00:02:21.146 CC lib/ut_mock/mock.o
00:02:21.146 CC lib/ut/ut.o
00:02:21.146 LIB libspdk_ut_mock.a
00:02:21.146 LIB libspdk_log.a
00:02:21.146 LIB libspdk_ut.a
00:02:21.146 SO libspdk_ut_mock.so.6.0
00:02:21.146 SO libspdk_log.so.7.1
00:02:21.146 SO libspdk_ut.so.2.0
00:02:21.146 SYMLINK libspdk_ut_mock.so
00:02:21.146 SYMLINK libspdk_log.so
00:02:21.146 SYMLINK libspdk_ut.so
00:02:21.715 CC lib/dma/dma.o
00:02:21.715 CC lib/ioat/ioat.o
00:02:21.715 CXX lib/trace_parser/trace.o
00:02:21.715 CC lib/util/base64.o
00:02:21.715 CC lib/util/bit_array.o
00:02:21.715 CC lib/util/cpuset.o
00:02:21.715 CC lib/util/crc16.o
00:02:21.715 CC lib/util/crc32.o
00:02:21.715 CC lib/util/crc32c.o
00:02:21.715 CC lib/util/crc32_ieee.o
00:02:21.715 CC lib/util/crc64.o
00:02:21.715 CC lib/util/dif.o
00:02:21.715 CC lib/util/file.o
00:02:21.715 CC lib/util/fd.o
00:02:21.715 CC lib/util/fd_group.o
00:02:21.715 CC lib/util/hexlify.o
00:02:21.715 CC lib/util/iov.o
00:02:21.715 CC lib/util/pipe.o
00:02:21.715 CC lib/util/math.o
00:02:21.715 CC lib/util/strerror_tls.o
00:02:21.715 CC lib/util/net.o
00:02:21.715 CC lib/util/string.o
00:02:21.715 CC lib/util/uuid.o
00:02:21.715 CC lib/util/xor.o
00:02:21.715 CC lib/util/zipf.o
00:02:21.715 CC lib/util/md5.o
00:02:21.715 CC lib/vfio_user/host/vfio_user_pci.o
00:02:21.715 CC lib/vfio_user/host/vfio_user.o
00:02:21.715 LIB libspdk_dma.a
00:02:21.715 SO libspdk_dma.so.5.0
00:02:21.715 LIB libspdk_ioat.a
00:02:21.974 SO libspdk_ioat.so.7.0
00:02:21.974 SYMLINK libspdk_dma.so
00:02:21.974 SYMLINK libspdk_ioat.so
00:02:21.974 LIB libspdk_vfio_user.a
00:02:21.974 SO libspdk_vfio_user.so.5.0
00:02:21.974 LIB libspdk_util.a
00:02:21.974 SYMLINK libspdk_vfio_user.so
00:02:21.974 SO libspdk_util.so.10.1
00:02:22.233 SYMLINK libspdk_util.so
00:02:22.233 LIB libspdk_trace_parser.a
00:02:22.233 SO libspdk_trace_parser.so.6.0
00:02:22.492 SYMLINK libspdk_trace_parser.so
00:02:22.492 CC lib/vmd/vmd.o
00:02:22.492 CC lib/vmd/led.o
00:02:22.492 CC lib/json/json_parse.o
00:02:22.492 CC lib/rdma_utils/rdma_utils.o
00:02:22.492 CC lib/json/json_util.o
00:02:22.492 CC lib/json/json_write.o
00:02:22.492 CC lib/conf/conf.o
00:02:22.492 CC lib/idxd/idxd.o
00:02:22.492 CC lib/idxd/idxd_user.o
00:02:22.492 CC lib/idxd/idxd_kernel.o
00:02:22.492 CC lib/env_dpdk/env.o
00:02:22.492 CC lib/env_dpdk/memory.o
00:02:22.492 CC lib/env_dpdk/pci.o
00:02:22.492 CC lib/env_dpdk/init.o
00:02:22.492 CC lib/env_dpdk/threads.o
00:02:22.492 CC lib/env_dpdk/pci_ioat.o
00:02:22.492 CC lib/env_dpdk/pci_virtio.o
00:02:22.492 CC lib/env_dpdk/pci_vmd.o
00:02:22.492 CC lib/env_dpdk/pci_idxd.o
00:02:22.492 CC lib/env_dpdk/pci_event.o
00:02:22.492 CC lib/env_dpdk/sigbus_handler.o
00:02:22.492 CC lib/env_dpdk/pci_dpdk.o
00:02:22.492 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:22.492 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:22.754 LIB libspdk_conf.a
00:02:22.754 LIB libspdk_rdma_utils.a
00:02:22.754 SO libspdk_conf.so.6.0
00:02:22.754 LIB libspdk_json.a
00:02:22.754 SO libspdk_rdma_utils.so.1.0
00:02:22.754 SO libspdk_json.so.6.0
00:02:23.016 SYMLINK libspdk_conf.so
00:02:23.016 SYMLINK libspdk_rdma_utils.so
00:02:23.016 SYMLINK libspdk_json.so
00:02:23.016 LIB libspdk_idxd.a
00:02:23.016 SO libspdk_idxd.so.12.1
00:02:23.016 LIB libspdk_vmd.a
00:02:23.016 SO libspdk_vmd.so.6.0
00:02:23.275 SYMLINK libspdk_idxd.so
00:02:23.275 SYMLINK libspdk_vmd.so
00:02:23.275 CC lib/rdma_provider/common.o
00:02:23.275 CC lib/rdma_provider/rdma_provider_verbs.o
00:02:23.275 CC lib/jsonrpc/jsonrpc_server.o
00:02:23.275 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:23.275 CC lib/jsonrpc/jsonrpc_client.o
00:02:23.275 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:23.533 LIB libspdk_rdma_provider.a
00:02:23.533 SO libspdk_rdma_provider.so.7.0
00:02:23.533 LIB libspdk_jsonrpc.a
00:02:23.533 SO libspdk_jsonrpc.so.6.0
00:02:23.533 SYMLINK libspdk_rdma_provider.so
00:02:23.533 LIB libspdk_env_dpdk.a
00:02:23.533 SYMLINK libspdk_jsonrpc.so
00:02:23.533 SO libspdk_env_dpdk.so.15.1
00:02:23.791 SYMLINK libspdk_env_dpdk.so
00:02:24.051 CC lib/rpc/rpc.o
00:02:24.051 LIB libspdk_rpc.a
00:02:24.311 SO libspdk_rpc.so.6.0
00:02:24.311 SYMLINK libspdk_rpc.so
00:02:24.570 CC lib/trace/trace.o
00:02:24.570 CC lib/trace/trace_flags.o
00:02:24.570 CC lib/trace/trace_rpc.o
00:02:24.570 CC lib/notify/notify.o
00:02:24.570 CC lib/notify/notify_rpc.o
00:02:24.570 CC lib/keyring/keyring.o
00:02:24.570 CC lib/keyring/keyring_rpc.o
00:02:24.831 LIB libspdk_notify.a
00:02:24.831 LIB libspdk_trace.a
00:02:24.831 LIB libspdk_keyring.a
00:02:24.831 SO libspdk_notify.so.6.0
00:02:24.831 SO libspdk_trace.so.11.0
00:02:24.831 SO libspdk_keyring.so.2.0
00:02:24.831 SYMLINK libspdk_notify.so
00:02:24.831 SYMLINK libspdk_trace.so
00:02:24.831 SYMLINK libspdk_keyring.so
00:02:25.400 CC lib/thread/thread.o
00:02:25.400 CC lib/thread/iobuf.o
00:02:25.400 CC lib/sock/sock.o
00:02:25.400 CC lib/sock/sock_rpc.o
00:02:25.659 LIB libspdk_sock.a
00:02:25.659 SO libspdk_sock.so.10.0
00:02:25.659 SYMLINK libspdk_sock.so
00:02:25.918 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:25.918 CC lib/nvme/nvme_ctrlr.o
00:02:25.918 CC lib/nvme/nvme_fabric.o
00:02:25.918 CC lib/nvme/nvme_ns_cmd.o
00:02:25.918 CC lib/nvme/nvme_ns.o
00:02:25.918 CC lib/nvme/nvme_pcie_common.o
00:02:25.918 CC lib/nvme/nvme_pcie.o
00:02:25.918 CC lib/nvme/nvme_qpair.o
00:02:25.918 CC lib/nvme/nvme.o
00:02:25.918 CC lib/nvme/nvme_quirks.o
00:02:25.919 CC lib/nvme/nvme_transport.o
00:02:25.919 CC lib/nvme/nvme_discovery.o
00:02:25.919 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:25.919 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:25.919 CC lib/nvme/nvme_tcp.o
00:02:25.919 CC lib/nvme/nvme_opal.o
00:02:25.919 CC lib/nvme/nvme_io_msg.o
00:02:25.919 CC lib/nvme/nvme_poll_group.o
00:02:25.919 CC lib/nvme/nvme_zns.o
00:02:25.919 CC lib/nvme/nvme_stubs.o
00:02:25.919 CC lib/nvme/nvme_auth.o
00:02:25.919 CC lib/nvme/nvme_cuse.o
00:02:25.919 CC lib/nvme/nvme_rdma.o
00:02:26.486 LIB libspdk_thread.a
00:02:26.486 SO libspdk_thread.so.11.0
00:02:26.487 SYMLINK libspdk_thread.so
00:02:26.746 CC lib/accel/accel.o
00:02:26.746 CC lib/accel/accel_sw.o
00:02:26.746 CC lib/virtio/virtio.o
00:02:26.746 CC lib/accel/accel_rpc.o
00:02:26.746 CC lib/virtio/virtio_vhost_user.o
00:02:26.746 CC lib/virtio/virtio_vfio_user.o
00:02:26.746 CC lib/virtio/virtio_pci.o
00:02:26.746 CC lib/init/subsystem.o
00:02:26.746 CC lib/init/json_config.o
00:02:26.746 CC lib/init/subsystem_rpc.o
00:02:26.746 CC lib/init/rpc.o
00:02:26.746 CC lib/fsdev/fsdev.o
00:02:26.746 CC lib/fsdev/fsdev_io.o
00:02:26.746 CC lib/blob/request.o
00:02:26.746 CC lib/fsdev/fsdev_rpc.o
00:02:26.746 CC lib/blob/blobstore.o
00:02:26.746 CC lib/blob/zeroes.o
00:02:26.746 CC lib/blob/blob_bs_dev.o
00:02:27.005 LIB libspdk_init.a
00:02:27.005 SO libspdk_init.so.6.0
00:02:27.005 LIB libspdk_virtio.a
00:02:27.005 SYMLINK libspdk_init.so
00:02:27.005 SO libspdk_virtio.so.7.0
00:02:27.005 SYMLINK libspdk_virtio.so
00:02:27.264 LIB libspdk_fsdev.a
00:02:27.264 SO libspdk_fsdev.so.2.0
00:02:27.264 SYMLINK libspdk_fsdev.so
00:02:27.264 CC lib/event/app.o
00:02:27.264 CC lib/event/app_rpc.o
00:02:27.264 CC lib/event/reactor.o
00:02:27.522 CC lib/event/log_rpc.o
00:02:27.522 CC lib/event/scheduler_static.o
00:02:27.522 LIB libspdk_accel.a
00:02:27.522 SO libspdk_accel.so.16.0
00:02:27.522 SYMLINK libspdk_accel.so
00:02:27.781 LIB libspdk_nvme.a
00:02:27.781 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:02:27.781 LIB libspdk_event.a
00:02:27.781 SO libspdk_event.so.14.0
00:02:27.781 SO libspdk_nvme.so.15.0
00:02:27.781 SYMLINK libspdk_event.so
00:02:28.041 CC lib/bdev/bdev.o
00:02:28.041 CC lib/bdev/bdev_rpc.o
00:02:28.041 CC lib/bdev/bdev_zone.o
00:02:28.041 CC lib/bdev/part.o
00:02:28.041 CC lib/bdev/scsi_nvme.o
00:02:28.041 SYMLINK libspdk_nvme.so
00:02:28.041 LIB libspdk_fuse_dispatcher.a
00:02:28.300 SO libspdk_fuse_dispatcher.so.1.0
00:02:28.300 SYMLINK libspdk_fuse_dispatcher.so
00:02:28.868 LIB libspdk_blob.a
00:02:28.868 SO libspdk_blob.so.11.0
00:02:28.868 SYMLINK libspdk_blob.so
00:02:29.437 CC lib/lvol/lvol.o
00:02:29.437 CC lib/blobfs/blobfs.o
00:02:29.437 CC lib/blobfs/tree.o
00:02:29.696 LIB libspdk_bdev.a
00:02:29.696 SO libspdk_bdev.so.17.0
00:02:29.955 SYMLINK libspdk_bdev.so
00:02:29.955 LIB libspdk_blobfs.a
00:02:29.955 SO libspdk_blobfs.so.10.0
00:02:29.955 LIB libspdk_lvol.a
00:02:29.955 SO libspdk_lvol.so.10.0
00:02:29.955 SYMLINK libspdk_blobfs.so
00:02:29.955 SYMLINK libspdk_lvol.so
00:02:30.215 CC lib/scsi/dev.o
00:02:30.215 CC lib/scsi/lun.o
00:02:30.215 CC lib/scsi/port.o
00:02:30.215 CC lib/scsi/scsi_bdev.o
00:02:30.215 CC lib/scsi/scsi.o
00:02:30.215 CC lib/scsi/scsi_pr.o
00:02:30.215 CC lib/scsi/scsi_rpc.o
00:02:30.215 CC lib/scsi/task.o
00:02:30.215 CC lib/ftl/ftl_core.o
00:02:30.215 CC lib/ftl/ftl_layout.o
00:02:30.215 CC lib/ftl/ftl_init.o
00:02:30.215 CC lib/ftl/ftl_debug.o
00:02:30.215 CC lib/ftl/ftl_l2p_flat.o
00:02:30.215 CC lib/ftl/ftl_io.o
00:02:30.215 CC lib/ftl/ftl_sb.o
00:02:30.215 CC lib/ftl/ftl_l2p.o
00:02:30.215 CC lib/ftl/ftl_nv_cache.o
00:02:30.215 CC lib/ftl/ftl_band.o
00:02:30.215 CC lib/ftl/ftl_band_ops.o
00:02:30.215 CC lib/ftl/ftl_writer.o
00:02:30.215 CC lib/ftl/ftl_rq.o
00:02:30.215 CC lib/ftl/ftl_reloc.o
00:02:30.215 CC lib/ftl/ftl_l2p_cache.o
00:02:30.215 CC lib/ftl/ftl_p2l.o
00:02:30.215 CC lib/ftl/ftl_p2l_log.o
00:02:30.215 CC lib/ftl/mngt/ftl_mngt.o
00:02:30.215 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:30.215 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:30.215 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:30.215 CC lib/nvmf/ctrlr.o
00:02:30.215 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:30.215 CC lib/nvmf/ctrlr_discovery.o
00:02:30.215 CC lib/ublk/ublk.o
00:02:30.215 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:30.215 CC lib/nvmf/ctrlr_bdev.o
00:02:30.215 CC lib/ublk/ublk_rpc.o
00:02:30.215 CC lib/nvmf/subsystem.o
00:02:30.216 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:30.216 CC lib/nvmf/nvmf.o
00:02:30.216 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:30.216 CC lib/nvmf/nvmf_rpc.o
00:02:30.216 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:30.216 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:30.216 CC lib/nvmf/transport.o
00:02:30.216 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:30.216 CC lib/nvmf/tcp.o
00:02:30.216 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:30.216 CC lib/nvmf/mdns_server.o
00:02:30.216 CC lib/nvmf/stubs.o
00:02:30.216 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:30.216 CC lib/ftl/utils/ftl_conf.o
00:02:30.216 CC lib/nvmf/rdma.o
00:02:30.216 CC lib/nvmf/auth.o
00:02:30.216 CC lib/ftl/utils/ftl_md.o
00:02:30.216 CC lib/ftl/utils/ftl_mempool.o
00:02:30.216 CC lib/ftl/utils/ftl_bitmap.o
00:02:30.216 CC lib/nbd/nbd_rpc.o
00:02:30.216 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:30.216 CC lib/nbd/nbd.o
00:02:30.216 CC lib/ftl/utils/ftl_property.o
00:02:30.216 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:30.216 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:30.216 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:30.216 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:30.216 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:30.216 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:30.216 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:30.216 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:30.216 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:30.216 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:02:30.216 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:30.216 CC lib/ftl/base/ftl_base_dev.o
00:02:30.216 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:02:30.216 CC lib/ftl/base/ftl_base_bdev.o
00:02:30.216 CC lib/ftl/ftl_trace.o
00:02:30.783 LIB libspdk_scsi.a
00:02:30.783 LIB libspdk_nbd.a
00:02:30.783 SO libspdk_nbd.so.7.0
00:02:30.783 SO libspdk_scsi.so.9.0
00:02:31.043 SYMLINK libspdk_nbd.so
00:02:31.043 SYMLINK libspdk_scsi.so
00:02:31.043 LIB libspdk_ublk.a
00:02:31.043 SO libspdk_ublk.so.3.0
00:02:31.043 SYMLINK libspdk_ublk.so
00:02:31.300 LIB libspdk_ftl.a
00:02:31.300 CC lib/vhost/vhost_rpc.o
00:02:31.300 CC lib/vhost/vhost.o
00:02:31.300 CC lib/vhost/vhost_blk.o
00:02:31.300 CC lib/vhost/vhost_scsi.o
00:02:31.300 CC lib/vhost/rte_vhost_user.o
00:02:31.300 CC lib/iscsi/iscsi.o
00:02:31.300 CC lib/iscsi/conn.o
00:02:31.300 CC lib/iscsi/init_grp.o
00:02:31.300 CC lib/iscsi/param.o
00:02:31.300 CC lib/iscsi/portal_grp.o
00:02:31.300 CC lib/iscsi/iscsi_rpc.o
00:02:31.300 CC lib/iscsi/tgt_node.o
00:02:31.300 CC lib/iscsi/iscsi_subsystem.o
00:02:31.300 CC lib/iscsi/task.o
00:02:31.300 SO libspdk_ftl.so.9.0
00:02:31.557 SYMLINK libspdk_ftl.so
00:02:32.127 LIB libspdk_nvmf.a
00:02:32.127 SO libspdk_nvmf.so.20.0
00:02:32.127 LIB libspdk_vhost.a
00:02:32.127 SO libspdk_vhost.so.8.0
00:02:32.127 SYMLINK libspdk_nvmf.so
00:02:32.127 SYMLINK libspdk_vhost.so
00:02:32.386 LIB libspdk_iscsi.a
00:02:32.386 SO libspdk_iscsi.so.8.0
00:02:32.386 SYMLINK libspdk_iscsi.so
00:02:32.953 CC module/env_dpdk/env_dpdk_rpc.o
00:02:33.212 CC module/sock/posix/posix.o
00:02:33.212 LIB libspdk_env_dpdk_rpc.a
00:02:33.212 CC module/fsdev/aio/fsdev_aio.o
00:02:33.212 CC module/fsdev/aio/fsdev_aio_rpc.o
00:02:33.212 CC module/fsdev/aio/linux_aio_mgr.o
00:02:33.212 CC module/blob/bdev/blob_bdev.o
00:02:33.212 CC module/accel/iaa/accel_iaa.o
00:02:33.212 CC module/accel/ioat/accel_ioat.o
00:02:33.212 CC module/keyring/file/keyring_rpc.o
00:02:33.212 CC module/accel/ioat/accel_ioat_rpc.o
00:02:33.212 CC module/accel/iaa/accel_iaa_rpc.o
00:02:33.212 CC module/keyring/file/keyring.o
00:02:33.212 CC module/keyring/linux/keyring.o
00:02:33.212 CC module/accel/error/accel_error_rpc.o
00:02:33.212 CC module/accel/error/accel_error.o
00:02:33.212 CC module/keyring/linux/keyring_rpc.o
00:02:33.212 CC module/accel/dsa/accel_dsa.o
00:02:33.212 CC module/accel/dsa/accel_dsa_rpc.o
00:02:33.212 CC module/scheduler/gscheduler/gscheduler.o
00:02:33.212 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:33.212 SO libspdk_env_dpdk_rpc.so.6.0
00:02:33.212 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:33.212 SYMLINK libspdk_env_dpdk_rpc.so
00:02:33.470 LIB libspdk_keyring_linux.a
00:02:33.470 LIB libspdk_scheduler_gscheduler.a
00:02:33.470 LIB libspdk_keyring_file.a
00:02:33.470 SO libspdk_scheduler_gscheduler.so.4.0
00:02:33.470 LIB libspdk_accel_ioat.a
00:02:33.470 SO libspdk_keyring_linux.so.1.0
00:02:33.470 LIB libspdk_scheduler_dpdk_governor.a
00:02:33.470 LIB libspdk_accel_iaa.a
00:02:33.470 LIB libspdk_accel_error.a
00:02:33.470 SO libspdk_keyring_file.so.2.0
00:02:33.470 SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:33.470 LIB libspdk_scheduler_dynamic.a
00:02:33.470 SO libspdk_accel_ioat.so.6.0
00:02:33.470 SYMLINK libspdk_scheduler_gscheduler.so
00:02:33.470 SO libspdk_accel_iaa.so.3.0
00:02:33.470 LIB libspdk_blob_bdev.a
00:02:33.470 SO libspdk_accel_error.so.2.0
00:02:33.470 SYMLINK libspdk_keyring_linux.so
00:02:33.470 SO libspdk_scheduler_dynamic.so.4.0
00:02:33.470 SO libspdk_blob_bdev.so.11.0
00:02:33.470 LIB libspdk_accel_dsa.a
00:02:33.470 SYMLINK libspdk_keyring_file.so
00:02:33.470 SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:33.470 SYMLINK libspdk_accel_ioat.so
00:02:33.470 SYMLINK libspdk_accel_iaa.so
00:02:33.470 SYMLINK libspdk_accel_error.so
00:02:33.470 SO libspdk_accel_dsa.so.5.0
00:02:33.470 SYMLINK libspdk_scheduler_dynamic.so
00:02:33.470 SYMLINK libspdk_blob_bdev.so
00:02:33.470 SYMLINK libspdk_accel_dsa.so
00:02:33.729 LIB libspdk_fsdev_aio.a
00:02:33.729 LIB libspdk_sock_posix.a
00:02:33.729 SO libspdk_fsdev_aio.so.1.0
00:02:33.729 SO libspdk_sock_posix.so.6.0
00:02:33.729 SYMLINK libspdk_fsdev_aio.so
00:02:33.988 SYMLINK libspdk_sock_posix.so
00:02:33.988 CC module/bdev/malloc/bdev_malloc.o
00:02:33.988 CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:33.988 CC module/bdev/gpt/gpt.o
00:02:33.988 CC module/bdev/gpt/vbdev_gpt.o
00:02:33.988 CC module/bdev/split/vbdev_split.o
00:02:33.988 CC module/bdev/split/vbdev_split_rpc.o
00:02:33.988 CC module/bdev/null/bdev_null.o
00:02:33.988 CC module/bdev/lvol/vbdev_lvol.o
00:02:33.988 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:33.988 CC module/bdev/iscsi/bdev_iscsi.o
00:02:33.988 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:33.988 CC module/bdev/null/bdev_null_rpc.o
00:02:33.988 CC module/bdev/error/vbdev_error_rpc.o
00:02:33.988 CC module/bdev/error/vbdev_error.o
00:02:33.988 CC module/bdev/raid/bdev_raid_rpc.o
00:02:33.988 CC module/bdev/raid/bdev_raid.o
00:02:33.988 CC module/bdev/aio/bdev_aio.o
00:02:33.988 CC module/bdev/raid/bdev_raid_sb.o
00:02:33.988 CC module/bdev/zone_block/vbdev_zone_block.o
00:02:33.988 CC module/bdev/ftl/bdev_ftl.o
00:02:33.988 CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:33.988 CC module/bdev/nvme/bdev_nvme.o
00:02:33.988 CC module/bdev/raid/raid1.o
00:02:33.988 CC module/bdev/raid/raid0.o
00:02:33.988 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:33.988 CC module/bdev/aio/bdev_aio_rpc.o
00:02:33.988 CC module/bdev/delay/vbdev_delay.o
00:02:33.988 CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:33.988 CC module/bdev/raid/concat.o
00:02:33.988 CC module/bdev/nvme/nvme_rpc.o
00:02:33.988 CC module/bdev/nvme/bdev_mdns_client.o
00:02:33.988 CC module/bdev/delay/vbdev_delay_rpc.o
00:02:33.988 CC module/bdev/nvme/vbdev_opal.o
00:02:33.988 CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:33.988 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:33.988 CC module/blobfs/bdev/blobfs_bdev.o
00:02:33.988 CC module/bdev/passthru/vbdev_passthru.o
00:02:33.988 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:33.988 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:33.988 CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:33.988 CC module/bdev/virtio/bdev_virtio_blk.o
00:02:33.988 CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:34.247 LIB libspdk_blobfs_bdev.a
00:02:34.247 LIB libspdk_bdev_split.a
00:02:34.247 LIB libspdk_bdev_null.a
00:02:34.247 LIB libspdk_bdev_gpt.a
00:02:34.247 SO libspdk_blobfs_bdev.so.6.0
00:02:34.247 SO libspdk_bdev_split.so.6.0
00:02:34.247 LIB libspdk_bdev_error.a
00:02:34.247 SO libspdk_bdev_null.so.6.0
00:02:34.247 SO libspdk_bdev_gpt.so.6.0
00:02:34.247 LIB libspdk_bdev_ftl.a
00:02:34.247 SYMLINK libspdk_blobfs_bdev.so
00:02:34.247 LIB libspdk_bdev_malloc.a
00:02:34.247 SYMLINK libspdk_bdev_split.so
00:02:34.506 SO libspdk_bdev_error.so.6.0
00:02:34.506 LIB libspdk_bdev_zone_block.a
00:02:34.506 LIB libspdk_bdev_passthru.a
00:02:34.506 SYMLINK libspdk_bdev_null.so
00:02:34.506 LIB libspdk_bdev_aio.a
00:02:34.506 SO libspdk_bdev_malloc.so.6.0
00:02:34.506 LIB libspdk_bdev_iscsi.a
00:02:34.506 SO libspdk_bdev_ftl.so.6.0
00:02:34.506 SO libspdk_bdev_zone_block.so.6.0
00:02:34.506 SO libspdk_bdev_aio.so.6.0
00:02:34.506 SO libspdk_bdev_passthru.so.6.0
00:02:34.506 LIB libspdk_bdev_delay.a
00:02:34.506 SYMLINK libspdk_bdev_gpt.so
00:02:34.506 SO libspdk_bdev_iscsi.so.6.0
00:02:34.506 SYMLINK libspdk_bdev_error.so
00:02:34.506 SYMLINK libspdk_bdev_malloc.so
00:02:34.506 SO libspdk_bdev_delay.so.6.0
00:02:34.506 SYMLINK libspdk_bdev_zone_block.so
00:02:34.506 SYMLINK libspdk_bdev_ftl.so
00:02:34.506 SYMLINK libspdk_bdev_aio.so
00:02:34.506 SYMLINK libspdk_bdev_passthru.so
00:02:34.506 SYMLINK libspdk_bdev_iscsi.so
00:02:34.506 SYMLINK libspdk_bdev_delay.so
00:02:34.506 LIB libspdk_bdev_lvol.a
00:02:34.506 LIB libspdk_bdev_virtio.a
00:02:34.506 SO libspdk_bdev_lvol.so.6.0
00:02:34.506 SO libspdk_bdev_virtio.so.6.0
00:02:34.506 SYMLINK libspdk_bdev_lvol.so
00:02:34.766 SYMLINK libspdk_bdev_virtio.so
00:02:34.766 LIB libspdk_bdev_raid.a
00:02:35.027 SO libspdk_bdev_raid.so.6.0
00:02:35.027 SYMLINK libspdk_bdev_raid.so
00:02:35.967 LIB libspdk_bdev_nvme.a
00:02:35.967 SO libspdk_bdev_nvme.so.7.1
00:02:35.967 SYMLINK libspdk_bdev_nvme.so
00:02:36.958 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:36.958 CC module/event/subsystems/vmd/vmd.o
00:02:36.958 CC module/event/subsystems/keyring/keyring.o
00:02:36.958 CC module/event/subsystems/vmd/vmd_rpc.o
00:02:36.958 CC module/event/subsystems/scheduler/scheduler.o
00:02:36.958 CC module/event/subsystems/fsdev/fsdev.o
00:02:36.958 CC module/event/subsystems/iobuf/iobuf.o
00:02:36.958 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:36.958 CC module/event/subsystems/sock/sock.o
00:02:36.958 LIB libspdk_event_vhost_blk.a
00:02:36.958 LIB libspdk_event_keyring.a
00:02:36.958 LIB libspdk_event_vmd.a
00:02:36.959 SO libspdk_event_vhost_blk.so.3.0
00:02:36.959 LIB libspdk_event_fsdev.a
00:02:36.959 LIB libspdk_event_scheduler.a
00:02:36.959 LIB libspdk_event_sock.a
00:02:36.959 SO libspdk_event_keyring.so.1.0
00:02:36.959 LIB libspdk_event_iobuf.a
00:02:36.959 SO libspdk_event_vmd.so.6.0
00:02:36.959 SO libspdk_event_fsdev.so.1.0
00:02:36.959 SO libspdk_event_sock.so.5.0
00:02:36.959 SO libspdk_event_scheduler.so.4.0
00:02:36.959 SO libspdk_event_iobuf.so.3.0
00:02:36.959 SYMLINK libspdk_event_vhost_blk.so
00:02:36.959 SYMLINK libspdk_event_keyring.so
00:02:36.959 SYMLINK libspdk_event_fsdev.so
00:02:36.959 SYMLINK libspdk_event_sock.so
00:02:36.959 SYMLINK libspdk_event_vmd.so
00:02:36.959 SYMLINK libspdk_event_scheduler.so
00:02:36.959 SYMLINK libspdk_event_iobuf.so
00:02:37.527 CC module/event/subsystems/accel/accel.o
00:02:37.527 LIB libspdk_event_accel.a
00:02:37.527 SO libspdk_event_accel.so.6.0
00:02:37.527 SYMLINK libspdk_event_accel.so
00:02:38.093 CC module/event/subsystems/bdev/bdev.o
00:02:38.093 LIB libspdk_event_bdev.a
00:02:38.093 SO libspdk_event_bdev.so.6.0
00:02:38.351 SYMLINK libspdk_event_bdev.so
00:02:38.610 CC module/event/subsystems/nbd/nbd.o
00:02:38.610 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:38.610 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:38.610 CC module/event/subsystems/ublk/ublk.o
00:02:38.610 CC module/event/subsystems/scsi/scsi.o
00:02:38.868 LIB libspdk_event_nbd.a
00:02:38.868 LIB libspdk_event_ublk.a
00:02:38.868 LIB libspdk_event_scsi.a
00:02:38.868 SO libspdk_event_nbd.so.6.0
00:02:38.868 SO libspdk_event_ublk.so.3.0
00:02:38.868 SO libspdk_event_scsi.so.6.0
00:02:38.868 LIB libspdk_event_nvmf.a
00:02:38.868 SO libspdk_event_nvmf.so.6.0
00:02:38.868 SYMLINK libspdk_event_nbd.so
00:02:38.868 SYMLINK libspdk_event_ublk.so
00:02:38.868 SYMLINK libspdk_event_scsi.so
00:02:38.868 SYMLINK libspdk_event_nvmf.so
00:02:39.127 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:02:39.127 CC module/event/subsystems/iscsi/iscsi.o
00:02:39.386 LIB libspdk_event_vhost_scsi.a
00:02:39.386 LIB libspdk_event_iscsi.a
00:02:39.386 SO libspdk_event_vhost_scsi.so.3.0
00:02:39.386 SO libspdk_event_iscsi.so.6.0
00:02:39.386 SYMLINK libspdk_event_vhost_scsi.so
00:02:39.386 SYMLINK libspdk_event_iscsi.so
00:02:39.645 SO libspdk.so.6.0
00:02:39.645 SYMLINK libspdk.so
00:02:39.902 CC app/spdk_nvme_perf/perf.o
00:02:39.902 CXX app/trace/trace.o
00:02:39.902 CC app/spdk_nvme_discover/discovery_aer.o
00:02:39.902 CC app/spdk_lspci/spdk_lspci.o
00:02:39.902 CC app/spdk_top/spdk_top.o
00:02:39.902 CC app/trace_record/trace_record.o
00:02:39.902 CC app/spdk_nvme_identify/identify.o
00:02:40.166 CC app/spdk_dd/spdk_dd.o
00:02:40.166 CC test/rpc_client/rpc_client_test.o
00:02:40.166 CC app/iscsi_tgt/iscsi_tgt.o
00:02:40.166 TEST_HEADER include/spdk/assert.h
00:02:40.166 TEST_HEADER include/spdk/barrier.h
00:02:40.166 TEST_HEADER include/spdk/accel.h
00:02:40.166 TEST_HEADER include/spdk/accel_module.h
00:02:40.166 TEST_HEADER include/spdk/base64.h
00:02:40.166 TEST_HEADER include/spdk/bdev_module.h
00:02:40.166 TEST_HEADER include/spdk/bdev.h
00:02:40.166 CC app/nvmf_tgt/nvmf_main.o
00:02:40.166 TEST_HEADER include/spdk/bdev_zone.h
00:02:40.166 TEST_HEADER include/spdk/bit_pool.h
00:02:40.166 TEST_HEADER include/spdk/bit_array.h
00:02:40.166 TEST_HEADER include/spdk/blobfs_bdev.h
00:02:40.166 TEST_HEADER include/spdk/blob_bdev.h
00:02:40.166 TEST_HEADER include/spdk/blobfs.h
00:02:40.166 TEST_HEADER include/spdk/conf.h
00:02:40.166 TEST_HEADER include/spdk/blob.h
00:02:40.166 TEST_HEADER include/spdk/config.h
00:02:40.166 TEST_HEADER include/spdk/cpuset.h
00:02:40.166 TEST_HEADER include/spdk/crc16.h
00:02:40.166 TEST_HEADER include/spdk/crc32.h
00:02:40.166 TEST_HEADER include/spdk/crc64.h
00:02:40.166 TEST_HEADER include/spdk/dif.h
00:02:40.166 TEST_HEADER include/spdk/dma.h
00:02:40.166 TEST_HEADER include/spdk/endian.h
00:02:40.166 CC examples/interrupt_tgt/interrupt_tgt.o
00:02:40.166 TEST_HEADER include/spdk/env_dpdk.h
00:02:40.166 TEST_HEADER include/spdk/env.h
00:02:40.166 TEST_HEADER include/spdk/event.h
00:02:40.166 TEST_HEADER include/spdk/fd_group.h
00:02:40.166 TEST_HEADER include/spdk/file.h
00:02:40.166 TEST_HEADER include/spdk/fd.h
00:02:40.166 TEST_HEADER include/spdk/fsdev.h
00:02:40.166 CC app/spdk_tgt/spdk_tgt.o
00:02:40.166 TEST_HEADER include/spdk/ftl.h
00:02:40.166 TEST_HEADER include/spdk/fuse_dispatcher.h
00:02:40.166 TEST_HEADER include/spdk/fsdev_module.h
00:02:40.166 TEST_HEADER include/spdk/hexlify.h
00:02:40.166 TEST_HEADER include/spdk/idxd.h
00:02:40.166 TEST_HEADER include/spdk/gpt_spec.h
00:02:40.166 TEST_HEADER include/spdk/histogram_data.h
00:02:40.166 TEST_HEADER include/spdk/init.h
00:02:40.166 TEST_HEADER include/spdk/idxd_spec.h
00:02:40.166 TEST_HEADER include/spdk/iscsi_spec.h
00:02:40.166 TEST_HEADER include/spdk/ioat.h
00:02:40.166 TEST_HEADER include/spdk/json.h
00:02:40.166 TEST_HEADER include/spdk/ioat_spec.h
00:02:40.166 TEST_HEADER include/spdk/jsonrpc.h
00:02:40.166 TEST_HEADER include/spdk/keyring_module.h
00:02:40.166 TEST_HEADER include/spdk/keyring.h
00:02:40.166 TEST_HEADER include/spdk/likely.h
00:02:40.166 TEST_HEADER include/spdk/lvol.h
00:02:40.166 TEST_HEADER include/spdk/log.h
00:02:40.166 TEST_HEADER include/spdk/md5.h
00:02:40.166 TEST_HEADER include/spdk/memory.h
00:02:40.166 TEST_HEADER include/spdk/mmio.h
00:02:40.166 TEST_HEADER include/spdk/nbd.h
00:02:40.166 TEST_HEADER include/spdk/net.h
00:02:40.166 TEST_HEADER include/spdk/notify.h
00:02:40.166 TEST_HEADER include/spdk/nvme.h
00:02:40.166 TEST_HEADER include/spdk/nvme_intel.h
00:02:40.166 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:02:40.166 TEST_HEADER include/spdk/nvme_ocssd.h
00:02:40.166 TEST_HEADER include/spdk/nvme_spec.h
00:02:40.166 TEST_HEADER include/spdk/nvmf_cmd.h
00:02:40.166 TEST_HEADER include/spdk/nvme_zns.h
00:02:40.166 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:02:40.166 TEST_HEADER include/spdk/nvmf_spec.h
00:02:40.166 TEST_HEADER include/spdk/nvmf.h
00:02:40.166 TEST_HEADER include/spdk/nvmf_transport.h
00:02:40.166 TEST_HEADER include/spdk/opal_spec.h
00:02:40.166 TEST_HEADER include/spdk/pci_ids.h
00:02:40.166 TEST_HEADER include/spdk/opal.h
00:02:40.166 TEST_HEADER include/spdk/queue.h
00:02:40.166 TEST_HEADER include/spdk/pipe.h
00:02:40.166 TEST_HEADER include/spdk/reduce.h
00:02:40.167 TEST_HEADER include/spdk/scheduler.h
00:02:40.167 TEST_HEADER include/spdk/scsi.h
00:02:40.167 TEST_HEADER include/spdk/rpc.h
00:02:40.167 TEST_HEADER include/spdk/scsi_spec.h
00:02:40.167 TEST_HEADER include/spdk/stdinc.h
00:02:40.167 TEST_HEADER include/spdk/sock.h
00:02:40.167 TEST_HEADER include/spdk/string.h
00:02:40.167 TEST_HEADER include/spdk/thread.h
00:02:40.167 TEST_HEADER include/spdk/trace.h
00:02:40.167 TEST_HEADER include/spdk/tree.h
00:02:40.167 TEST_HEADER include/spdk/ublk.h
00:02:40.167 TEST_HEADER include/spdk/trace_parser.h
00:02:40.167 TEST_HEADER include/spdk/util.h
00:02:40.167 TEST_HEADER include/spdk/vfio_user_pci.h
00:02:40.167 TEST_HEADER include/spdk/uuid.h
00:02:40.167 TEST_HEADER include/spdk/vfio_user_spec.h
00:02:40.167 TEST_HEADER include/spdk/version.h
00:02:40.167 TEST_HEADER include/spdk/vhost.h
00:02:40.167 TEST_HEADER include/spdk/vmd.h
00:02:40.167 TEST_HEADER include/spdk/xor.h
00:02:40.167 CXX test/cpp_headers/accel_module.o
00:02:40.167 TEST_HEADER include/spdk/zipf.h
00:02:40.167 CXX test/cpp_headers/assert.o
00:02:40.167 CXX test/cpp_headers/accel.o
00:02:40.167 CXX test/cpp_headers/barrier.o
00:02:40.167 CXX test/cpp_headers/bdev.o
00:02:40.167 CXX test/cpp_headers/base64.o
00:02:40.167 CXX test/cpp_headers/bdev_zone.o
00:02:40.167 CXX test/cpp_headers/bdev_module.o
00:02:40.167 CXX test/cpp_headers/bit_array.o
00:02:40.167 CXX test/cpp_headers/blobfs_bdev.o
00:02:40.167 CXX test/cpp_headers/blobfs.o
00:02:40.167 CXX test/cpp_headers/bit_pool.o
00:02:40.167 CXX test/cpp_headers/blob_bdev.o
00:02:40.167 CXX test/cpp_headers/conf.o
00:02:40.167 CXX test/cpp_headers/blob.o
00:02:40.167 CXX test/cpp_headers/config.o
00:02:40.167 CXX test/cpp_headers/cpuset.o
00:02:40.167 CXX test/cpp_headers/crc16.o
00:02:40.167 CXX test/cpp_headers/crc32.o
00:02:40.167 CXX test/cpp_headers/dif.o
00:02:40.167 CXX test/cpp_headers/crc64.o
00:02:40.167 CXX test/cpp_headers/env.o
00:02:40.167 CXX test/cpp_headers/dma.o
00:02:40.167 CXX test/cpp_headers/env_dpdk.o
00:02:40.167 CXX test/cpp_headers/endian.o
00:02:40.167 CXX test/cpp_headers/event.o
00:02:40.167 CXX test/cpp_headers/file.o
00:02:40.167 CXX test/cpp_headers/fd_group.o
00:02:40.167 CXX test/cpp_headers/fsdev.o
00:02:40.167 CXX test/cpp_headers/fd.o
00:02:40.167 CXX test/cpp_headers/fsdev_module.o
00:02:40.167 CXX test/cpp_headers/gpt_spec.o
00:02:40.167 CXX test/cpp_headers/ftl.o
00:02:40.167 CXX test/cpp_headers/hexlify.o
00:02:40.167 CXX test/cpp_headers/fuse_dispatcher.o
00:02:40.167 CXX test/cpp_headers/idxd.o
00:02:40.167 CXX test/cpp_headers/idxd_spec.o
00:02:40.167 CXX test/cpp_headers/histogram_data.o
00:02:40.167 CXX test/cpp_headers/init.o
00:02:40.167 CXX test/cpp_headers/ioat.o
00:02:40.167 CXX test/cpp_headers/ioat_spec.o
00:02:40.167 CXX test/cpp_headers/iscsi_spec.o
00:02:40.167 CXX test/cpp_headers/json.o
00:02:40.167 CXX test/cpp_headers/keyring.o
00:02:40.167 CXX test/cpp_headers/jsonrpc.o
00:02:40.167 CXX test/cpp_headers/keyring_module.o
00:02:40.167 CXX test/cpp_headers/likely.o
00:02:40.167 CXX test/cpp_headers/lvol.o
00:02:40.167 CXX test/cpp_headers/md5.o
00:02:40.167 CXX test/cpp_headers/log.o
00:02:40.167 CXX test/cpp_headers/mmio.o
00:02:40.167 CXX test/cpp_headers/memory.o
00:02:40.167 CXX test/cpp_headers/nbd.o
00:02:40.167 CXX test/cpp_headers/net.o
00:02:40.167 CXX test/cpp_headers/nvme_intel.o
00:02:40.167 CXX test/cpp_headers/nvme_ocssd.o
00:02:40.167 CXX test/cpp_headers/notify.o
00:02:40.167 CXX test/cpp_headers/nvme.o
00:02:40.167 CXX test/cpp_headers/nvme_spec.o
00:02:40.167 CXX test/cpp_headers/nvme_zns.o
00:02:40.167 CXX test/cpp_headers/nvme_ocssd_spec.o
00:02:40.167 CXX test/cpp_headers/nvmf_fc_spec.o
00:02:40.167 CXX test/cpp_headers/nvmf_cmd.o
00:02:40.167 CXX test/cpp_headers/nvmf.o
00:02:40.167 CXX test/cpp_headers/nvmf_spec.o
00:02:40.167 CXX test/cpp_headers/opal.o
00:02:40.167 CXX test/cpp_headers/nvmf_transport.o
00:02:40.167 CXX test/cpp_headers/opal_spec.o
00:02:40.167 CXX test/cpp_headers/pci_ids.o
00:02:40.167 CXX test/cpp_headers/pipe.o
00:02:40.167 CXX test/cpp_headers/reduce.o
00:02:40.167 CXX test/cpp_headers/queue.o
00:02:40.167 CXX test/cpp_headers/rpc.o
00:02:40.167 CC app/fio/nvme/fio_plugin.o
00:02:40.167 CXX test/cpp_headers/scheduler.o
00:02:40.167 CXX test/cpp_headers/sock.o
00:02:40.167 CXX test/cpp_headers/scsi.o
00:02:40.167 CXX test/cpp_headers/scsi_spec.o
00:02:40.167 CXX test/cpp_headers/stdinc.o
00:02:40.167 CXX test/cpp_headers/string.o
00:02:40.167 CXX test/cpp_headers/thread.o
00:02:40.167 CXX test/cpp_headers/tree.o
00:02:40.167 CXX test/cpp_headers/trace.o
00:02:40.167 CXX test/cpp_headers/trace_parser.o
00:02:40.167 CC test/thread/poller_perf/poller_perf.o
00:02:40.167 LINK spdk_lspci
00:02:40.167 CC examples/util/zipf/zipf.o
00:02:40.167 CC test/env/vtophys/vtophys.o
00:02:40.167 CC examples/ioat/verify/verify.o
00:02:40.167 CC test/env/pci/pci_ut.o
00:02:40.167 CC test/env/memory/memory_ut.o
00:02:40.167 CC examples/ioat/perf/perf.o
00:02:40.167 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:02:40.443 CC test/app/jsoncat/jsoncat.o
00:02:40.443 CC app/fio/bdev/fio_plugin.o
00:02:40.443 CC test/app/histogram_perf/histogram_perf.o
00:02:40.443 CC test/dma/test_dma/test_dma.o
00:02:40.443 CC test/app/stub/stub.o
00:02:40.443 CC test/app/bdev_svc/bdev_svc.o
00:02:40.443 CXX test/cpp_headers/ublk.o
00:02:40.443 LINK spdk_nvme_discover
00:02:40.443 LINK rpc_client_test
00:02:40.718 LINK nvmf_tgt
00:02:40.718 LINK iscsi_tgt
00:02:40.718 CC test/env/mem_callbacks/mem_callbacks.o
00:02:40.718 LINK spdk_tgt
00:02:40.718 LINK interrupt_tgt
00:02:40.976 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:02:40.976 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:02:40.976 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:02:40.976 LINK spdk_trace_record
00:02:40.976 LINK poller_perf
00:02:40.976 LINK jsoncat
00:02:40.976 LINK histogram_perf
00:02:40.976 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:02:40.976 LINK vtophys
00:02:40.976 CXX test/cpp_headers/util.o
00:02:40.976 CXX test/cpp_headers/uuid.o
00:02:40.976 CXX test/cpp_headers/version.o
00:02:40.976 LINK zipf
00:02:40.976 CXX test/cpp_headers/vfio_user_pci.o
00:02:40.976 CXX test/cpp_headers/vfio_user_spec.o
00:02:40.976 CXX test/cpp_headers/vhost.o
00:02:40.976 CXX test/cpp_headers/vmd.o
00:02:40.976 CXX test/cpp_headers/xor.o
00:02:40.976 CXX test/cpp_headers/zipf.o
00:02:40.976 LINK env_dpdk_post_init
00:02:40.976 LINK bdev_svc
00:02:40.976 LINK stub
00:02:40.976 LINK ioat_perf
00:02:40.976 LINK verify
00:02:40.976 LINK spdk_dd
00:02:41.234 LINK spdk_trace
00:02:41.234 LINK pci_ut
00:02:41.234 LINK spdk_nvme
00:02:41.234 LINK test_dma
00:02:41.234 LINK nvme_fuzz
00:02:41.492 LINK spdk_nvme_perf
00:02:41.492 LINK spdk_bdev
00:02:41.492 LINK vhost_fuzz
00:02:41.492 LINK spdk_nvme_identify
00:02:41.492 CC test/event/reactor/reactor.o
00:02:41.492 CC test/event/event_perf/event_perf.o
00:02:41.492 CC test/event/reactor_perf/reactor_perf.o
00:02:41.492 CC test/event/app_repeat/app_repeat.o
00:02:41.492 CC examples/sock/hello_world/hello_sock.o
00:02:41.492 LINK spdk_top
00:02:41.492 CC examples/vmd/lsvmd/lsvmd.o
00:02:41.492 CC test/event/scheduler/scheduler.o
00:02:41.492 CC examples/vmd/led/led.o
00:02:41.492 CC examples/idxd/perf/perf.o
00:02:41.492 CC examples/thread/thread/thread_ex.o
00:02:41.492 CC app/vhost/vhost.o
00:02:41.492 LINK mem_callbacks
00:02:41.492 LINK reactor
00:02:41.492 LINK event_perf
00:02:41.492 LINK reactor_perf
00:02:41.750 LINK lsvmd
00:02:41.750 LINK led
00:02:41.750 LINK app_repeat
00:02:41.750 LINK scheduler
00:02:41.750 LINK hello_sock
00:02:41.750 LINK vhost
00:02:41.750 LINK thread
00:02:41.750 LINK idxd_perf
00:02:41.750 CC test/nvme/overhead/overhead.o
00:02:41.750 LINK memory_ut
00:02:41.750 CC test/nvme/startup/startup.o
00:02:41.750 CC test/nvme/cuse/cuse.o
00:02:41.750 CC test/nvme/simple_copy/simple_copy.o
00:02:41.750 CC test/nvme/e2edp/nvme_dp.o
00:02:41.750 CC test/nvme/aer/aer.o
00:02:41.750 CC test/nvme/sgl/sgl.o
00:02:41.750 CC test/nvme/err_injection/err_injection.o
00:02:41.750 CC test/nvme/boot_partition/boot_partition.o
00:02:41.751 CC test/nvme/doorbell_aers/doorbell_aers.o
00:02:41.751 CC test/nvme/fused_ordering/fused_ordering.o
00:02:41.751 CC test/nvme/connect_stress/connect_stress.o
00:02:41.751 CC test/nvme/compliance/nvme_compliance.o
00:02:41.751 CC test/nvme/reset/reset.o
00:02:41.751 CC test/nvme/reserve/reserve.o
00:02:41.751 CC test/nvme/fdp/fdp.o
00:02:41.751 CC test/blobfs/mkfs/mkfs.o
00:02:41.751 CC test/accel/dif/dif.o
00:02:42.009 CC test/lvol/esnap/esnap.o
00:02:42.009 LINK startup
00:02:42.009 LINK boot_partition
00:02:42.009 LINK connect_stress
00:02:42.009 LINK doorbell_aers
00:02:42.009 LINK fused_ordering
00:02:42.009 LINK reserve
00:02:42.009 LINK err_injection
00:02:42.009 LINK mkfs
00:02:42.009 LINK overhead
00:02:42.009 LINK simple_copy
00:02:42.009 LINK reset
00:02:42.009 LINK sgl
00:02:42.009 LINK nvme_dp
00:02:42.009 LINK aer
00:02:42.009 LINK fdp
00:02:42.266 CC examples/nvme/nvme_manage/nvme_manage.o
00:02:42.266 CC examples/nvme/hotplug/hotplug.o
00:02:42.266 LINK nvme_compliance
00:02:42.266 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:02:42.266 CC examples/nvme/arbitration/arbitration.o
00:02:42.266 CC examples/nvme/cmb_copy/cmb_copy.o
00:02:42.266 CC examples/nvme/hello_world/hello_world.o
00:02:42.266 CC examples/nvme/abort/abort.o
00:02:42.266 CC examples/nvme/reconnect/reconnect.o
00:02:42.266 CC examples/accel/perf/accel_perf.o
00:02:42.266 LINK iscsi_fuzz
00:02:42.266 CC examples/blob/hello_world/hello_blob.o
00:02:42.266 CC examples/fsdev/hello_world/hello_fsdev.o
00:02:42.266 CC examples/blob/cli/blobcli.o
00:02:42.266 LINK cmb_copy
00:02:42.266 LINK pmr_persistence
00:02:42.266 LINK hello_world
00:02:42.266 LINK hotplug
00:02:42.525 LINK dif
00:02:42.525 LINK arbitration
00:02:42.525 LINK reconnect
00:02:42.525 LINK abort
00:02:42.525 LINK hello_blob
00:02:42.525 LINK nvme_manage
00:02:42.525 LINK hello_fsdev
00:02:42.525 LINK accel_perf
00:02:42.784 LINK blobcli
00:02:42.784 LINK cuse
00:02:43.042 CC test/bdev/bdevio/bdevio.o
00:02:43.042 CC examples/bdev/bdevperf/bdevperf.o
00:02:43.042 CC examples/bdev/hello_world/hello_bdev.o
00:02:43.301 LINK bdevio
00:02:43.301 LINK hello_bdev
00:02:43.870 LINK bdevperf
00:02:44.129 CC examples/nvmf/nvmf/nvmf.o
00:02:44.389 LINK nvmf
00:02:45.770 LINK esnap
00:02:45.770
00:02:45.770 real 0m54.409s
00:02:45.770 user 7m40.100s
00:02:45.770 sys 4m7.857s
00:02:45.770 10:31:13 make -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:02:45.770 10:31:13 make -- common/autotest_common.sh@10 -- $ set +x
00:02:45.770 ************************************
00:02:45.770 END TEST make
00:02:45.770 ************************************
00:02:45.770 10:31:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:02:45.770 10:31:13 -- pm/common@29 -- $ signal_monitor_resources TERM
00:02:45.770 10:31:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:02:45.770 10:31:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:45.770 10:31:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:02:45.770 10:31:13 -- pm/common@44 -- $ pid=3513981
00:02:45.770 10:31:13 -- pm/common@50 -- $ kill -TERM 3513981
00:02:45.770 10:31:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:45.770 10:31:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:45.770 10:31:13 -- pm/common@44 -- $ pid=3513983
00:02:45.770 10:31:13 -- pm/common@50 -- $ kill -TERM 3513983
00:02:45.770 10:31:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:45.770 10:31:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:45.770 10:31:13 -- pm/common@44 -- $ pid=3513984
00:02:45.770 10:31:13 -- pm/common@50 -- $ kill -TERM 3513984
00:02:45.770 10:31:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:45.770 10:31:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:45.770 10:31:13 -- pm/common@44 -- $ pid=3514008
00:02:45.770 10:31:13 -- pm/common@50 -- $ sudo -E kill -TERM 3514008
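The stop_monitor_resources trace above follows a plain pid-file pattern: check that the monitor's .pid file exists, read the pid, kill -TERM it. A condensed sketch of that pattern; the function name and argument are illustrative stand-ins, not the exact helpers in pm/common:

  # Stop a background monitor recorded via a pid file; tolerate a dead process.
  stop_monitor() {
      local pidfile=$1
      [[ -e "$pidfile" ]] || return 0          # monitor was never started
      local pid
      pid=$(<"$pidfile")
      kill -TERM "$pid" 2>/dev/null || true    # mirrors the kill -TERM calls above
      rm -f "$pidfile"
  }
  stop_monitor /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid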
00:02:45.770 10:31:13 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:02:45.770 10:31:13 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:02:46.029 10:31:13 -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:02:46.029 10:31:13 -- common/autotest_common.sh@1691 -- # lcov --version
00:02:46.029 10:31:13 -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:02:46.029 10:31:13 -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:02:46.029 10:31:13 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:02:46.029 10:31:13 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:02:46.029 10:31:13 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:02:46.029 10:31:13 -- scripts/common.sh@336 -- # IFS=.-:
00:02:46.029 10:31:13 -- scripts/common.sh@336 -- # read -ra ver1
00:02:46.029 10:31:13 -- scripts/common.sh@337 -- # IFS=.-:
00:02:46.029 10:31:13 -- scripts/common.sh@337 -- # read -ra ver2
00:02:46.029 10:31:13 -- scripts/common.sh@338 -- # local 'op=<'
00:02:46.029 10:31:13 -- scripts/common.sh@340 -- # ver1_l=2
00:02:46.029 10:31:13 -- scripts/common.sh@341 -- # ver2_l=1
00:02:46.029 10:31:13 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:02:46.029 10:31:13 -- scripts/common.sh@344 -- # case "$op" in
00:02:46.029 10:31:13 -- scripts/common.sh@345 -- # : 1
00:02:46.029 10:31:13 -- scripts/common.sh@364 -- # (( v = 0 ))
00:02:46.029 10:31:13 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:46.029 10:31:13 -- scripts/common.sh@365 -- # decimal 1
00:02:46.029 10:31:13 -- scripts/common.sh@353 -- # local d=1
00:02:46.029 10:31:13 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:02:46.029 10:31:13 -- scripts/common.sh@355 -- # echo 1
00:02:46.029 10:31:13 -- scripts/common.sh@365 -- # ver1[v]=1
00:02:46.029 10:31:13 -- scripts/common.sh@366 -- # decimal 2
00:02:46.029 10:31:13 -- scripts/common.sh@353 -- # local d=2
00:02:46.029 10:31:13 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:02:46.029 10:31:13 -- scripts/common.sh@355 -- # echo 2
00:02:46.029 10:31:13 -- scripts/common.sh@366 -- # ver2[v]=2
00:02:46.029 10:31:13 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:02:46.029 10:31:13 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:02:46.030 10:31:13 -- scripts/common.sh@368 -- # return 0
00:02:46.030 10:31:13 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:02:46.030 10:31:13 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:02:46.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:46.030 --rc genhtml_branch_coverage=1
00:02:46.030 --rc genhtml_function_coverage=1
00:02:46.030 --rc genhtml_legend=1
00:02:46.030 --rc geninfo_all_blocks=1
00:02:46.030 --rc geninfo_unexecuted_blocks=1
00:02:46.030
00:02:46.030 '
00:02:46.030 10:31:13 -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:02:46.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:46.030 --rc genhtml_branch_coverage=1
00:02:46.030 --rc genhtml_function_coverage=1
00:02:46.030 --rc genhtml_legend=1
00:02:46.030 --rc geninfo_all_blocks=1
00:02:46.030 --rc geninfo_unexecuted_blocks=1
00:02:46.030
00:02:46.030 '
00:02:46.030 10:31:13 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:02:46.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:46.030 --rc genhtml_branch_coverage=1
00:02:46.030 --rc genhtml_function_coverage=1
00:02:46.030 --rc genhtml_legend=1
00:02:46.030 --rc geninfo_all_blocks=1
00:02:46.030 --rc geninfo_unexecuted_blocks=1
00:02:46.030
00:02:46.030 '
00:02:46.030 10:31:13 -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:02:46.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:02:46.030 --rc genhtml_branch_coverage=1
00:02:46.030 --rc genhtml_function_coverage=1
00:02:46.030 --rc genhtml_legend=1
00:02:46.030 --rc geninfo_all_blocks=1
00:02:46.030 --rc geninfo_unexecuted_blocks=1
00:02:46.030
00:02:46.030 '
00:02:46.030 10:31:13 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:02:46.030 10:31:13 -- nvmf/common.sh@7 -- # uname -s
00:02:46.030 10:31:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:46.030 10:31:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:46.030 10:31:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:46.030 10:31:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:46.030 10:31:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:46.030 10:31:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:46.030 10:31:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:46.030 10:31:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:46.030 10:31:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:46.030 10:31:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:46.030 10:31:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:02:46.030 10:31:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:02:46.030 10:31:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:46.030 10:31:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:46.030 10:31:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:02:46.030 10:31:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:46.030 10:31:13 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:02:46.030 10:31:13 -- scripts/common.sh@15 -- # shopt -s extglob
00:02:46.030 10:31:13 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:46.030 10:31:13 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:46.030 10:31:13 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:46.030 10:31:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:46.030 10:31:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:46.030 10:31:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:46.030 10:31:13 -- paths/export.sh@5 -- # export PATH
00:02:46.030 10:31:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:46.030 10:31:13 -- nvmf/common.sh@51 -- # : 0
00:02:46.030 10:31:13 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:02:46.030 10:31:13 -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:02:46.030 10:31:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:46.030 10:31:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:46.030 10:31:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:46.030 10:31:13 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:02:46.030 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:02:46.030 10:31:13 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:02:46.030 10:31:13 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:02:46.030 10:31:13 -- nvmf/common.sh@55 -- # have_pci_nics=0
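The "integer expression expected" message above is bash's test builtin being handed an empty string where common.sh line 33 expects a number ('[' '' -eq 1 ']'); the script keeps going because the test simply evaluates false. A hedged sketch of the usual guard (some_flag is an illustrative stand-in for the unset knob, not the actual variable name in common.sh):

  # '[ "" -eq 1 ]' raises "integer expression expected"; defaulting the
  # variable first keeps the arithmetic test well-formed.
  some_flag=""
  if [ "${some_flag:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi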
10:31:13 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:46.030 10:31:13 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:46.030 10:31:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:46.030 10:31:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:46.030 10:31:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:46.030 10:31:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:46.030 10:31:13 -- spdk/autotest.sh@48 -- # udevadm_pid=3577377 00:02:46.030 10:31:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:46.030 10:31:13 -- pm/common@17 -- # local monitor 00:02:46.030 10:31:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.030 10:31:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.030 10:31:13 -- pm/common@21 -- # date +%s 00:02:46.030 10:31:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.030 10:31:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.030 10:31:13 -- pm/common@21 -- # date +%s 00:02:46.030 10:31:13 -- pm/common@25 -- # sleep 1 00:02:46.030 10:31:13 -- pm/common@21 -- # date +%s 00:02:46.030 10:31:13 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730971873 00:02:46.030 10:31:13 -- pm/common@21 -- # date +%s 00:02:46.030 10:31:13 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730971873 00:02:46.030 10:31:13 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730971873 00:02:46.030 10:31:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730971873 00:02:46.030 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730971873_collect-vmstat.pm.log 00:02:46.030 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730971873_collect-cpu-load.pm.log 00:02:46.030 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730971873_collect-cpu-temp.pm.log 00:02:46.289 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730971873_collect-bmc-pm.bmc.pm.log 00:02:47.228 10:31:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:47.228 10:31:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:47.228 10:31:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:47.228 10:31:14 -- common/autotest_common.sh@10 -- # set +x 00:02:47.228 10:31:14 -- spdk/autotest.sh@59 -- # create_test_list 00:02:47.228 10:31:14 -- common/autotest_common.sh@750 -- # xtrace_disable 00:02:47.228 10:31:14 -- common/autotest_common.sh@10 -- # set +x 00:02:47.228 10:31:14 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:47.228 10:31:14 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:47.228 10:31:14 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:47.228 10:31:14 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:47.228 10:31:14 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:47.228 10:31:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:47.228 10:31:14 -- common/autotest_common.sh@1455 -- # uname 00:02:47.228 10:31:14 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:47.228 10:31:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:47.228 10:31:14 -- common/autotest_common.sh@1475 -- # uname 00:02:47.228 10:31:14 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:47.228 10:31:14 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:47.228 10:31:14 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:47.228 lcov: LCOV version 1.15 00:02:47.228 10:31:14 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:05.322 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:05.322 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:11.893 10:31:39 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:11.893 10:31:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:11.893 10:31:39 -- common/autotest_common.sh@10 -- # set +x 00:03:11.893 10:31:39 -- spdk/autotest.sh@78 -- # rm -f 00:03:11.893 10:31:39 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.183 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:15.183 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:15.183 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:15.183 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:15.183 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:15.183 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:15.475 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:15.475 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:15.475 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:15.475 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:15.475 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:15.475 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:15.475 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:15.475 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:15.475 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:15.475 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:15.735 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:15.735 10:31:43 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:15.735 10:31:43 -- 
common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:15.735 10:31:43 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:15.735 10:31:43 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:15.735 10:31:43 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:15.735 10:31:43 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:15.735 10:31:43 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:15.735 10:31:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:15.735 10:31:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:15.735 10:31:43 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:15.735 10:31:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:15.735 10:31:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:15.735 10:31:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:15.735 10:31:43 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:15.735 10:31:43 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:15.735 No valid GPT data, bailing 00:03:15.735 10:31:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:15.735 10:31:43 -- scripts/common.sh@394 -- # pt= 00:03:15.735 10:31:43 -- scripts/common.sh@395 -- # return 1 00:03:15.735 10:31:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:15.735 1+0 records in 00:03:15.735 1+0 records out 00:03:15.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511159 s, 205 MB/s 00:03:15.735 10:31:43 -- spdk/autotest.sh@105 -- # sync 00:03:15.735 10:31:43 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:15.735 10:31:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:15.735 10:31:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:22.303 10:31:49 -- spdk/autotest.sh@111 -- # uname -s 00:03:22.303 10:31:49 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:22.303 10:31:49 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:22.303 10:31:49 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:25.588 Hugepages 00:03:25.588 node hugesize free / total 00:03:25.588 node0 1048576kB 0 / 0 00:03:25.588 node0 2048kB 0 / 0 00:03:25.588 node1 1048576kB 0 / 0 00:03:25.588 node1 2048kB 0 / 0 00:03:25.588 00:03:25.588 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:25.588 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:25.588 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:25.588 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:25.588 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:25.588 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:25.588 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:25.588 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:25.588 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:25.588 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:25.588 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:25.588 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:25.588 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:25.588 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:25.588 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:25.588 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:25.588 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:25.847 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:25.847 10:31:53 -- spdk/autotest.sh@117 -- # uname 
-s 00:03:25.847 10:31:53 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:25.847 10:31:53 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:25.847 10:31:53 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:29.172 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:29.172 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:29.172 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:29.172 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:29.172 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:29.172 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:29.172 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:29.172 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:29.172 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:29.172 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:29.172 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:29.172 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:29.172 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:29.172 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:29.172 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:29.172 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:31.080 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:31.080 10:31:58 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:32.461 10:31:59 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:32.461 10:31:59 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:32.461 10:31:59 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:32.461 10:31:59 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:32.461 10:31:59 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:32.461 10:31:59 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:32.461 10:31:59 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:32.461 10:31:59 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:32.461 10:31:59 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:32.461 10:31:59 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:32.461 10:31:59 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:03:32.461 10:31:59 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.998 Waiting for block devices as requested 00:03:34.998 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:35.258 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:35.258 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:35.258 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:35.518 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:35.518 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:35.518 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:35.518 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:35.778 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:35.778 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:35.778 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:36.037 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:36.037 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:36.037 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:36.297 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:36.297 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:36.297 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:03:36.557 10:32:04 -- common/autotest_common.sh@1522 -- # for bdf in 
"${bdfs[@]}" 00:03:36.557 10:32:04 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:03:36.557 10:32:04 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:36.557 10:32:04 -- common/autotest_common.sh@1485 -- # grep 0000:d8:00.0/nvme/nvme 00:03:36.557 10:32:04 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:03:36.557 10:32:04 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:03:36.557 10:32:04 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:03:36.557 10:32:04 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:36.557 10:32:04 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:36.557 10:32:04 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:36.557 10:32:04 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:36.557 10:32:04 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:36.557 10:32:04 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:36.557 10:32:04 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:03:36.557 10:32:04 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:36.557 10:32:04 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:36.557 10:32:04 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:36.557 10:32:04 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:36.557 10:32:04 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:36.557 10:32:04 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:36.557 10:32:04 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:36.557 10:32:04 -- common/autotest_common.sh@1541 -- # continue 00:03:36.557 10:32:04 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:36.557 10:32:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:36.557 10:32:04 -- common/autotest_common.sh@10 -- # set +x 00:03:36.557 10:32:04 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:36.557 10:32:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:36.557 10:32:04 -- common/autotest_common.sh@10 -- # set +x 00:03:36.557 10:32:04 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:39.850 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:39.850 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:39.850 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:39.850 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:39.850 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:39.850 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:39.850 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:39.850 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:39.850 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:39.850 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:39.850 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:39.850 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:39.850 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:39.850 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:39.850 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:39.850 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:41.757 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:41.757 10:32:09 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:41.757 10:32:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:41.757 
10:32:09 -- common/autotest_common.sh@10 -- # set +x 00:03:41.757 10:32:09 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:41.757 10:32:09 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:41.757 10:32:09 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:41.757 10:32:09 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:41.757 10:32:09 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:41.757 10:32:09 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:41.757 10:32:09 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:41.757 10:32:09 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:41.757 10:32:09 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:41.757 10:32:09 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:41.757 10:32:09 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:41.757 10:32:09 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:41.757 10:32:09 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:41.757 10:32:09 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:41.757 10:32:09 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:03:41.757 10:32:09 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:41.757 10:32:09 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:03:41.757 10:32:09 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:03:41.757 10:32:09 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:41.757 10:32:09 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:03:41.757 10:32:09 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:03:41.757 10:32:09 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:d8:00.0 00:03:41.757 10:32:09 -- common/autotest_common.sh@1577 -- # [[ -z 0000:d8:00.0 ]] 00:03:41.757 10:32:09 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3592816 00:03:41.757 10:32:09 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.757 10:32:09 -- common/autotest_common.sh@1583 -- # waitforlisten 3592816 00:03:41.757 10:32:09 -- common/autotest_common.sh@833 -- # '[' -z 3592816 ']' 00:03:41.757 10:32:09 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:41.757 10:32:09 -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:41.757 10:32:09 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:41.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:41.757 10:32:09 -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:41.757 10:32:09 -- common/autotest_common.sh@10 -- # set +x 00:03:41.757 [2024-11-07 10:32:09.405924] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
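Worth calling out in the opal_revert_cleanup prologue above: the enumerated NVMe BDFs are narrowed to controllers whose PCI device ID reads 0x0a54 (an Intel data-center NVMe SSD) straight from sysfs. A minimal re-creation of that filter, assuming a get_nvme_bdfs helper that prints one PCI address per line, as the gen_nvme.sh | jq pipeline does here:

    get_nvme_bdfs_by_id() {
        local dev_id=$1 bdf
        local bdfs=()
        for bdf in $(get_nvme_bdfs); do
            # sysfs exposes each function's device ID, e.g. 0x0a54.
            [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$dev_id" ]] && bdfs+=("$bdf")
        done
        (( ${#bdfs[@]} > 0 )) && printf '%s\n' "${bdfs[@]}"
    }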
00:03:41.757 [2024-11-07 10:32:09.405973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3592816 ] 00:03:42.017 [2024-11-07 10:32:09.481979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.017 [2024-11-07 10:32:09.520465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:42.277 10:32:09 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:42.277 10:32:09 -- common/autotest_common.sh@866 -- # return 0 00:03:42.277 10:32:09 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:03:42.277 10:32:09 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:03:42.277 10:32:09 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:03:45.566 nvme0n1 00:03:45.566 10:32:12 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:45.566 [2024-11-07 10:32:12.921290] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:45.566 request: 00:03:45.566 { 00:03:45.566 "nvme_ctrlr_name": "nvme0", 00:03:45.566 "password": "test", 00:03:45.566 "method": "bdev_nvme_opal_revert", 00:03:45.566 "req_id": 1 00:03:45.566 } 00:03:45.566 Got JSON-RPC error response 00:03:45.566 response: 00:03:45.566 { 00:03:45.566 "code": -32602, 00:03:45.566 "message": "Invalid parameters" 00:03:45.566 } 00:03:45.566 10:32:12 -- common/autotest_common.sh@1589 -- # true 00:03:45.566 10:32:12 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:03:45.566 10:32:12 -- common/autotest_common.sh@1593 -- # killprocess 3592816 00:03:45.566 10:32:12 -- common/autotest_common.sh@952 -- # '[' -z 3592816 ']' 00:03:45.566 10:32:12 -- common/autotest_common.sh@956 -- # kill -0 3592816 00:03:45.566 10:32:12 -- common/autotest_common.sh@957 -- # uname 00:03:45.566 10:32:12 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:45.566 10:32:12 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3592816 00:03:45.566 10:32:13 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:45.566 10:32:13 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:45.566 10:32:13 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3592816' 00:03:45.566 killing process with pid 3592816 00:03:45.566 10:32:13 -- common/autotest_common.sh@971 -- # kill 3592816 00:03:45.566 10:32:13 -- common/autotest_common.sh@976 -- # wait 3592816 00:03:48.100 10:32:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:48.100 10:32:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:48.100 10:32:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:48.100 10:32:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:48.100 10:32:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:48.100 10:32:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:48.100 10:32:15 -- common/autotest_common.sh@10 -- # set +x 00:03:48.100 10:32:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:48.100 10:32:15 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:03:48.100 10:32:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:48.100 10:32:15 -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:03:48.100 10:32:15 -- common/autotest_common.sh@10 -- # set +x 00:03:48.100 ************************************ 00:03:48.100 START TEST env 00:03:48.100 ************************************ 00:03:48.100 10:32:15 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:03:48.100 * Looking for test storage... 00:03:48.100 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:03:48.100 10:32:15 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:48.100 10:32:15 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:48.100 10:32:15 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:48.360 10:32:15 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:48.360 10:32:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:48.360 10:32:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:48.360 10:32:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:48.360 10:32:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.360 10:32:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:48.360 10:32:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:48.360 10:32:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:48.360 10:32:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:48.360 10:32:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:48.360 10:32:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:48.360 10:32:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:48.360 10:32:15 env -- scripts/common.sh@344 -- # case "$op" in 00:03:48.360 10:32:15 env -- scripts/common.sh@345 -- # : 1 00:03:48.360 10:32:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:48.360 10:32:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:48.360 10:32:15 env -- scripts/common.sh@365 -- # decimal 1 00:03:48.360 10:32:15 env -- scripts/common.sh@353 -- # local d=1 00:03:48.360 10:32:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.360 10:32:15 env -- scripts/common.sh@355 -- # echo 1 00:03:48.360 10:32:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:48.360 10:32:15 env -- scripts/common.sh@366 -- # decimal 2 00:03:48.360 10:32:15 env -- scripts/common.sh@353 -- # local d=2 00:03:48.360 10:32:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.360 10:32:15 env -- scripts/common.sh@355 -- # echo 2 00:03:48.360 10:32:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:48.360 10:32:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:48.360 10:32:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:48.360 10:32:15 env -- scripts/common.sh@368 -- # return 0 00:03:48.360 10:32:15 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.360 10:32:15 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:48.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.360 --rc genhtml_branch_coverage=1 00:03:48.360 --rc genhtml_function_coverage=1 00:03:48.360 --rc genhtml_legend=1 00:03:48.360 --rc geninfo_all_blocks=1 00:03:48.360 --rc geninfo_unexecuted_blocks=1 00:03:48.360 00:03:48.360 ' 00:03:48.360 10:32:15 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:48.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.360 --rc genhtml_branch_coverage=1 00:03:48.360 --rc genhtml_function_coverage=1 00:03:48.360 --rc genhtml_legend=1 00:03:48.360 --rc geninfo_all_blocks=1 00:03:48.360 --rc geninfo_unexecuted_blocks=1 00:03:48.360 00:03:48.360 ' 00:03:48.360 10:32:15 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:48.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.360 --rc genhtml_branch_coverage=1 00:03:48.360 --rc genhtml_function_coverage=1 00:03:48.360 --rc genhtml_legend=1 00:03:48.360 --rc geninfo_all_blocks=1 00:03:48.360 --rc geninfo_unexecuted_blocks=1 00:03:48.360 00:03:48.360 ' 00:03:48.360 10:32:15 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:48.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.360 --rc genhtml_branch_coverage=1 00:03:48.360 --rc genhtml_function_coverage=1 00:03:48.360 --rc genhtml_legend=1 00:03:48.360 --rc geninfo_all_blocks=1 00:03:48.360 --rc geninfo_unexecuted_blocks=1 00:03:48.360 00:03:48.360 ' 00:03:48.360 10:32:15 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:03:48.360 10:32:15 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:48.360 10:32:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:48.360 10:32:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.360 ************************************ 00:03:48.360 START TEST env_memory 00:03:48.360 ************************************ 00:03:48.360 10:32:15 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:03:48.360 00:03:48.360 00:03:48.360 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.360 http://cunit.sourceforge.net/ 00:03:48.360 00:03:48.360 00:03:48.360 Suite: memory 00:03:48.360 Test: alloc and free memory map ...[2024-11-07 10:32:15.902497] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:48.360 passed 00:03:48.360 Test: mem map translation ...[2024-11-07 10:32:15.922186] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:48.360 [2024-11-07 10:32:15.922203] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:48.360 [2024-11-07 10:32:15.922238] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:48.360 [2024-11-07 10:32:15.922246] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:48.360 passed 00:03:48.360 Test: mem map registration ...[2024-11-07 10:32:15.961942] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:48.360 [2024-11-07 10:32:15.961959] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:48.360 passed 00:03:48.360 Test: mem map adjacent registrations ...passed 00:03:48.360 00:03:48.360 Run Summary: Type Total Ran Passed Failed Inactive 00:03:48.360 suites 1 1 n/a 0 0 00:03:48.360 tests 4 4 4 0 0 00:03:48.360 asserts 152 152 152 0 n/a 00:03:48.360 00:03:48.360 Elapsed time = 0.140 seconds 00:03:48.360 00:03:48.360 real 0m0.154s 00:03:48.360 user 0m0.138s 00:03:48.360 sys 0m0.016s 00:03:48.360 10:32:16 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:48.360 10:32:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:48.360 ************************************ 00:03:48.360 END TEST env_memory 00:03:48.360 ************************************ 00:03:48.621 10:32:16 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:48.621 10:32:16 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:48.621 10:32:16 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:48.621 10:32:16 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.621 ************************************ 00:03:48.621 START TEST env_vtophys 00:03:48.621 ************************************ 00:03:48.621 10:32:16 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:48.621 EAL: lib.eal log level changed from notice to debug 00:03:48.621 EAL: Detected lcore 0 as core 0 on socket 0 00:03:48.621 EAL: Detected lcore 1 as core 1 on socket 0 00:03:48.621 EAL: Detected lcore 2 as core 2 on socket 0 00:03:48.621 EAL: Detected lcore 3 as core 3 on socket 0 00:03:48.621 EAL: Detected lcore 4 as core 4 on socket 0 00:03:48.621 EAL: Detected lcore 5 as core 5 on socket 0 00:03:48.621 EAL: Detected lcore 6 as core 6 on socket 0 00:03:48.621 EAL: Detected lcore 7 as core 8 on socket 0 00:03:48.621 EAL: Detected lcore 8 as core 9 on socket 0 00:03:48.621 EAL: Detected lcore 9 as core 10 on socket 0 00:03:48.621 EAL: Detected lcore 10 as core 11 on socket 0 00:03:48.621 
EAL: Detected lcore 11 as core 12 on socket 0 00:03:48.621 EAL: Detected lcore 12 as core 13 on socket 0 00:03:48.621 EAL: Detected lcore 13 as core 14 on socket 0 00:03:48.621 EAL: Detected lcore 14 as core 16 on socket 0 00:03:48.621 EAL: Detected lcore 15 as core 17 on socket 0 00:03:48.621 EAL: Detected lcore 16 as core 18 on socket 0 00:03:48.621 EAL: Detected lcore 17 as core 19 on socket 0 00:03:48.621 EAL: Detected lcore 18 as core 20 on socket 0 00:03:48.621 EAL: Detected lcore 19 as core 21 on socket 0 00:03:48.621 EAL: Detected lcore 20 as core 22 on socket 0 00:03:48.621 EAL: Detected lcore 21 as core 24 on socket 0 00:03:48.621 EAL: Detected lcore 22 as core 25 on socket 0 00:03:48.621 EAL: Detected lcore 23 as core 26 on socket 0 00:03:48.621 EAL: Detected lcore 24 as core 27 on socket 0 00:03:48.621 EAL: Detected lcore 25 as core 28 on socket 0 00:03:48.621 EAL: Detected lcore 26 as core 29 on socket 0 00:03:48.621 EAL: Detected lcore 27 as core 30 on socket 0 00:03:48.621 EAL: Detected lcore 28 as core 0 on socket 1 00:03:48.621 EAL: Detected lcore 29 as core 1 on socket 1 00:03:48.621 EAL: Detected lcore 30 as core 2 on socket 1 00:03:48.621 EAL: Detected lcore 31 as core 3 on socket 1 00:03:48.621 EAL: Detected lcore 32 as core 4 on socket 1 00:03:48.621 EAL: Detected lcore 33 as core 5 on socket 1 00:03:48.621 EAL: Detected lcore 34 as core 6 on socket 1 00:03:48.621 EAL: Detected lcore 35 as core 8 on socket 1 00:03:48.621 EAL: Detected lcore 36 as core 9 on socket 1 00:03:48.621 EAL: Detected lcore 37 as core 10 on socket 1 00:03:48.621 EAL: Detected lcore 38 as core 11 on socket 1 00:03:48.622 EAL: Detected lcore 39 as core 12 on socket 1 00:03:48.622 EAL: Detected lcore 40 as core 13 on socket 1 00:03:48.622 EAL: Detected lcore 41 as core 14 on socket 1 00:03:48.622 EAL: Detected lcore 42 as core 16 on socket 1 00:03:48.622 EAL: Detected lcore 43 as core 17 on socket 1 00:03:48.622 EAL: Detected lcore 44 as core 18 on socket 1 00:03:48.622 EAL: Detected lcore 45 as core 19 on socket 1 00:03:48.622 EAL: Detected lcore 46 as core 20 on socket 1 00:03:48.622 EAL: Detected lcore 47 as core 21 on socket 1 00:03:48.622 EAL: Detected lcore 48 as core 22 on socket 1 00:03:48.622 EAL: Detected lcore 49 as core 24 on socket 1 00:03:48.622 EAL: Detected lcore 50 as core 25 on socket 1 00:03:48.622 EAL: Detected lcore 51 as core 26 on socket 1 00:03:48.622 EAL: Detected lcore 52 as core 27 on socket 1 00:03:48.622 EAL: Detected lcore 53 as core 28 on socket 1 00:03:48.622 EAL: Detected lcore 54 as core 29 on socket 1 00:03:48.622 EAL: Detected lcore 55 as core 30 on socket 1 00:03:48.622 EAL: Detected lcore 56 as core 0 on socket 0 00:03:48.622 EAL: Detected lcore 57 as core 1 on socket 0 00:03:48.622 EAL: Detected lcore 58 as core 2 on socket 0 00:03:48.622 EAL: Detected lcore 59 as core 3 on socket 0 00:03:48.622 EAL: Detected lcore 60 as core 4 on socket 0 00:03:48.622 EAL: Detected lcore 61 as core 5 on socket 0 00:03:48.622 EAL: Detected lcore 62 as core 6 on socket 0 00:03:48.622 EAL: Detected lcore 63 as core 8 on socket 0 00:03:48.622 EAL: Detected lcore 64 as core 9 on socket 0 00:03:48.622 EAL: Detected lcore 65 as core 10 on socket 0 00:03:48.622 EAL: Detected lcore 66 as core 11 on socket 0 00:03:48.622 EAL: Detected lcore 67 as core 12 on socket 0 00:03:48.622 EAL: Detected lcore 68 as core 13 on socket 0 00:03:48.622 EAL: Detected lcore 69 as core 14 on socket 0 00:03:48.622 EAL: Detected lcore 70 as core 16 on socket 0 00:03:48.622 EAL: Detected lcore 71 as core 
17 on socket 0 00:03:48.622 EAL: Detected lcore 72 as core 18 on socket 0 00:03:48.622 EAL: Detected lcore 73 as core 19 on socket 0 00:03:48.622 EAL: Detected lcore 74 as core 20 on socket 0 00:03:48.622 EAL: Detected lcore 75 as core 21 on socket 0 00:03:48.622 EAL: Detected lcore 76 as core 22 on socket 0 00:03:48.622 EAL: Detected lcore 77 as core 24 on socket 0 00:03:48.622 EAL: Detected lcore 78 as core 25 on socket 0 00:03:48.622 EAL: Detected lcore 79 as core 26 on socket 0 00:03:48.622 EAL: Detected lcore 80 as core 27 on socket 0 00:03:48.622 EAL: Detected lcore 81 as core 28 on socket 0 00:03:48.622 EAL: Detected lcore 82 as core 29 on socket 0 00:03:48.622 EAL: Detected lcore 83 as core 30 on socket 0 00:03:48.622 EAL: Detected lcore 84 as core 0 on socket 1 00:03:48.622 EAL: Detected lcore 85 as core 1 on socket 1 00:03:48.622 EAL: Detected lcore 86 as core 2 on socket 1 00:03:48.622 EAL: Detected lcore 87 as core 3 on socket 1 00:03:48.622 EAL: Detected lcore 88 as core 4 on socket 1 00:03:48.622 EAL: Detected lcore 89 as core 5 on socket 1 00:03:48.622 EAL: Detected lcore 90 as core 6 on socket 1 00:03:48.622 EAL: Detected lcore 91 as core 8 on socket 1 00:03:48.622 EAL: Detected lcore 92 as core 9 on socket 1 00:03:48.622 EAL: Detected lcore 93 as core 10 on socket 1 00:03:48.622 EAL: Detected lcore 94 as core 11 on socket 1 00:03:48.622 EAL: Detected lcore 95 as core 12 on socket 1 00:03:48.622 EAL: Detected lcore 96 as core 13 on socket 1 00:03:48.622 EAL: Detected lcore 97 as core 14 on socket 1 00:03:48.622 EAL: Detected lcore 98 as core 16 on socket 1 00:03:48.622 EAL: Detected lcore 99 as core 17 on socket 1 00:03:48.622 EAL: Detected lcore 100 as core 18 on socket 1 00:03:48.622 EAL: Detected lcore 101 as core 19 on socket 1 00:03:48.622 EAL: Detected lcore 102 as core 20 on socket 1 00:03:48.622 EAL: Detected lcore 103 as core 21 on socket 1 00:03:48.622 EAL: Detected lcore 104 as core 22 on socket 1 00:03:48.622 EAL: Detected lcore 105 as core 24 on socket 1 00:03:48.622 EAL: Detected lcore 106 as core 25 on socket 1 00:03:48.622 EAL: Detected lcore 107 as core 26 on socket 1 00:03:48.622 EAL: Detected lcore 108 as core 27 on socket 1 00:03:48.622 EAL: Detected lcore 109 as core 28 on socket 1 00:03:48.622 EAL: Detected lcore 110 as core 29 on socket 1 00:03:48.622 EAL: Detected lcore 111 as core 30 on socket 1 00:03:48.622 EAL: Maximum logical cores by configuration: 128 00:03:48.622 EAL: Detected CPU lcores: 112 00:03:48.622 EAL: Detected NUMA nodes: 2 00:03:48.622 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:48.622 EAL: Detected shared linkage of DPDK 00:03:48.622 EAL: No shared files mode enabled, IPC will be disabled 00:03:48.622 EAL: Bus pci wants IOVA as 'DC' 00:03:48.622 EAL: Buses did not request a specific IOVA mode. 00:03:48.622 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:48.622 EAL: Selected IOVA mode 'VA' 00:03:48.622 EAL: Probing VFIO support... 00:03:48.622 EAL: IOMMU type 1 (Type 1) is supported 00:03:48.622 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:48.622 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:48.622 EAL: VFIO support initialized 00:03:48.622 EAL: Ask a virtual area of 0x2e000 bytes 00:03:48.622 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:48.622 EAL: Setting up physically contiguous memory... 
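The lcore walk that just completed (112 lcores on 2 sockets; lcores 56-111 are the hyperthread siblings of 0-55, which is why the core/socket pairs repeat) is read from the CPU topology the kernel exports. A rough sysfs equivalent of the mapping EAL prints:

    # One line per logical CPU, mirroring the EAL 'Detected lcore' records.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        lcore=${cpu##*cpu}
        core=$(cat "$cpu/topology/core_id")
        socket=$(cat "$cpu/topology/physical_package_id")
        echo "lcore $lcore as core $core on socket $socket"
    done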
00:03:48.622 EAL: Setting maximum number of open files to 524288 00:03:48.622 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:48.622 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:48.622 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:48.622 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.622 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:48.622 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.622 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.622 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:48.622 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:48.622 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.622 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:48.622 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.622 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.622 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:48.622 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:48.622 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.622 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:48.622 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.622 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.622 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:48.622 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:48.622 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.622 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:48.622 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.622 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.622 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:48.622 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:48.622 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:48.622 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.622 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:48.622 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.622 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.622 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:48.622 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:48.622 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.622 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:48.622 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.622 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.622 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:48.622 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:48.622 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.622 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:48.622 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.622 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.622 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:48.622 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:48.622 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.622 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:48.622 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.622 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.622 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:48.622 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:48.622 EAL: Hugepages will be freed exactly as allocated. 00:03:48.622 EAL: No shared files mode enabled, IPC is disabled 00:03:48.622 EAL: No shared files mode enabled, IPC is disabled 00:03:48.622 EAL: TSC frequency is ~2500000 KHz 00:03:48.622 EAL: Main lcore 0 is ready (tid=7fba2d977a00;cpuset=[0]) 00:03:48.622 EAL: Trying to obtain current memory policy. 00:03:48.622 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.622 EAL: Restoring previous memory policy: 0 00:03:48.622 EAL: request: mp_malloc_sync 00:03:48.622 EAL: No shared files mode enabled, IPC is disabled 00:03:48.622 EAL: Heap on socket 0 was expanded by 2MB 00:03:48.622 EAL: No shared files mode enabled, IPC is disabled 00:03:48.622 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:48.622 EAL: Mem event callback 'spdk:(nil)' registered 00:03:48.622 00:03:48.622 00:03:48.622 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.622 http://cunit.sourceforge.net/ 00:03:48.622 00:03:48.622 00:03:48.622 Suite: components_suite 00:03:48.622 Test: vtophys_malloc_test ...passed 00:03:48.622 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:48.622 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.622 EAL: Restoring previous memory policy: 4 00:03:48.622 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.622 EAL: request: mp_malloc_sync 00:03:48.622 EAL: No shared files mode enabled, IPC is disabled 00:03:48.622 EAL: Heap on socket 0 was expanded by 4MB 00:03:48.622 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.622 EAL: request: mp_malloc_sync 00:03:48.622 EAL: No shared files mode enabled, IPC is disabled 00:03:48.622 EAL: Heap on socket 0 was shrunk by 4MB 00:03:48.622 EAL: Trying to obtain current memory policy. 00:03:48.622 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.622 EAL: Restoring previous memory policy: 4 00:03:48.622 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.622 EAL: request: mp_malloc_sync 00:03:48.622 EAL: No shared files mode enabled, IPC is disabled 00:03:48.622 EAL: Heap on socket 0 was expanded by 6MB 00:03:48.622 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.622 EAL: request: mp_malloc_sync 00:03:48.622 EAL: No shared files mode enabled, IPC is disabled 00:03:48.623 EAL: Heap on socket 0 was shrunk by 6MB 00:03:48.623 EAL: Trying to obtain current memory policy. 00:03:48.623 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.623 EAL: Restoring previous memory policy: 4 00:03:48.623 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.623 EAL: request: mp_malloc_sync 00:03:48.623 EAL: No shared files mode enabled, IPC is disabled 00:03:48.623 EAL: Heap on socket 0 was expanded by 10MB 00:03:48.623 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.623 EAL: request: mp_malloc_sync 00:03:48.623 EAL: No shared files mode enabled, IPC is disabled 00:03:48.623 EAL: Heap on socket 0 was shrunk by 10MB 00:03:48.623 EAL: Trying to obtain current memory policy. 
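The memseg reservations above are uniform: for each of four lists per socket, EAL asks for a 0x61000-byte descriptor area and then a 0x400000000-byte (16 GiB) VA window. Quick arithmetic on the total virtual address space set aside across both sockets:

    # 4 lists x 2 sockets, 16 GiB of VA each:
    printf 'reserved VA: %d GiB\n' $(( 4 * 2 * 0x400000000 / 1024**3 ))   # -> 128 GiB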
00:03:48.623 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.623 EAL: Restoring previous memory policy: 4 00:03:48.623 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.623 EAL: request: mp_malloc_sync 00:03:48.623 EAL: No shared files mode enabled, IPC is disabled 00:03:48.623 EAL: Heap on socket 0 was expanded by 18MB 00:03:48.623 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.623 EAL: request: mp_malloc_sync 00:03:48.623 EAL: No shared files mode enabled, IPC is disabled 00:03:48.623 EAL: Heap on socket 0 was shrunk by 18MB 00:03:48.623 EAL: Trying to obtain current memory policy. 00:03:48.623 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.623 EAL: Restoring previous memory policy: 4 00:03:48.623 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.623 EAL: request: mp_malloc_sync 00:03:48.623 EAL: No shared files mode enabled, IPC is disabled 00:03:48.623 EAL: Heap on socket 0 was expanded by 34MB 00:03:48.623 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.623 EAL: request: mp_malloc_sync 00:03:48.623 EAL: No shared files mode enabled, IPC is disabled 00:03:48.623 EAL: Heap on socket 0 was shrunk by 34MB 00:03:48.623 EAL: Trying to obtain current memory policy. 00:03:48.623 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.623 EAL: Restoring previous memory policy: 4 00:03:48.623 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.623 EAL: request: mp_malloc_sync 00:03:48.623 EAL: No shared files mode enabled, IPC is disabled 00:03:48.623 EAL: Heap on socket 0 was expanded by 66MB 00:03:48.623 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.623 EAL: request: mp_malloc_sync 00:03:48.623 EAL: No shared files mode enabled, IPC is disabled 00:03:48.623 EAL: Heap on socket 0 was shrunk by 66MB 00:03:48.623 EAL: Trying to obtain current memory policy. 00:03:48.623 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.623 EAL: Restoring previous memory policy: 4 00:03:48.623 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.623 EAL: request: mp_malloc_sync 00:03:48.623 EAL: No shared files mode enabled, IPC is disabled 00:03:48.623 EAL: Heap on socket 0 was expanded by 130MB 00:03:48.623 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.883 EAL: request: mp_malloc_sync 00:03:48.883 EAL: No shared files mode enabled, IPC is disabled 00:03:48.883 EAL: Heap on socket 0 was shrunk by 130MB 00:03:48.883 EAL: Trying to obtain current memory policy. 00:03:48.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.883 EAL: Restoring previous memory policy: 4 00:03:48.883 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.883 EAL: request: mp_malloc_sync 00:03:48.883 EAL: No shared files mode enabled, IPC is disabled 00:03:48.883 EAL: Heap on socket 0 was expanded by 258MB 00:03:48.883 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.883 EAL: request: mp_malloc_sync 00:03:48.883 EAL: No shared files mode enabled, IPC is disabled 00:03:48.883 EAL: Heap on socket 0 was shrunk by 258MB 00:03:48.883 EAL: Trying to obtain current memory policy. 
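A pattern worth noting in vtophys_spdk_malloc_test: each expand/shrink pair above grows the heap request as 2 MB plus a doubling term, i.e. 4 MB, 6 MB, 10 MB, 18 MB, 34 MB and so on, up to the 1026 MB pass later in the run. The progression, reproduced:

    for k in $(seq 1 10); do
        echo "$(( 2 + 2**k ))MB"
    done
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB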
00:03:48.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.883 EAL: Restoring previous memory policy: 4 00:03:48.883 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.883 EAL: request: mp_malloc_sync 00:03:48.883 EAL: No shared files mode enabled, IPC is disabled 00:03:48.883 EAL: Heap on socket 0 was expanded by 514MB 00:03:49.141 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.141 EAL: request: mp_malloc_sync 00:03:49.141 EAL: No shared files mode enabled, IPC is disabled 00:03:49.141 EAL: Heap on socket 0 was shrunk by 514MB 00:03:49.141 EAL: Trying to obtain current memory policy. 00:03:49.141 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.400 EAL: Restoring previous memory policy: 4 00:03:49.400 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.400 EAL: request: mp_malloc_sync 00:03:49.400 EAL: No shared files mode enabled, IPC is disabled 00:03:49.400 EAL: Heap on socket 0 was expanded by 1026MB 00:03:49.400 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.660 EAL: request: mp_malloc_sync 00:03:49.660 EAL: No shared files mode enabled, IPC is disabled 00:03:49.660 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:49.660 passed 00:03:49.660 00:03:49.660 Run Summary: Type Total Ran Passed Failed Inactive 00:03:49.660 suites 1 1 n/a 0 0 00:03:49.660 tests 2 2 2 0 0 00:03:49.660 asserts 497 497 497 0 n/a 00:03:49.660 00:03:49.660 Elapsed time = 0.966 seconds 00:03:49.660 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.660 EAL: request: mp_malloc_sync 00:03:49.660 EAL: No shared files mode enabled, IPC is disabled 00:03:49.660 EAL: Heap on socket 0 was shrunk by 2MB 00:03:49.660 EAL: No shared files mode enabled, IPC is disabled 00:03:49.660 EAL: No shared files mode enabled, IPC is disabled 00:03:49.660 EAL: No shared files mode enabled, IPC is disabled 00:03:49.660 00:03:49.660 real 0m1.108s 00:03:49.660 user 0m0.649s 00:03:49.660 sys 0m0.424s 00:03:49.660 10:32:17 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:49.660 10:32:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:49.660 ************************************ 00:03:49.660 END TEST env_vtophys 00:03:49.660 ************************************ 00:03:49.660 10:32:17 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:03:49.660 10:32:17 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:49.660 10:32:17 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:49.660 10:32:17 env -- common/autotest_common.sh@10 -- # set +x 00:03:49.660 ************************************ 00:03:49.660 START TEST env_pci 00:03:49.660 ************************************ 00:03:49.660 10:32:17 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:03:49.661 00:03:49.661 00:03:49.661 CUnit - A unit testing framework for C - Version 2.1-3 00:03:49.661 http://cunit.sourceforge.net/ 00:03:49.661 00:03:49.661 00:03:49.661 Suite: pci 00:03:49.661 Test: pci_hook ...[2024-11-07 10:32:17.306249] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3594369 has claimed it 00:03:49.920 EAL: Cannot find device (10000:00:01.0) 00:03:49.920 EAL: Failed to attach device on primary process 00:03:49.920 passed 00:03:49.920 00:03:49.920 Run Summary: Type Total Ran Passed Failed Inactive 00:03:49.920 suites 1 
1 n/a 0 0 00:03:49.920 tests 1 1 1 0 0 00:03:49.920 asserts 25 25 25 0 n/a 00:03:49.920 00:03:49.920 Elapsed time = 0.034 seconds 00:03:49.920 00:03:49.920 real 0m0.056s 00:03:49.920 user 0m0.017s 00:03:49.920 sys 0m0.039s 00:03:49.920 10:32:17 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:49.920 10:32:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:49.920 ************************************ 00:03:49.920 END TEST env_pci 00:03:49.920 ************************************ 00:03:49.920 10:32:17 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:49.920 10:32:17 env -- env/env.sh@15 -- # uname 00:03:49.920 10:32:17 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:49.920 10:32:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:49.920 10:32:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:49.920 10:32:17 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:03:49.920 10:32:17 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:49.920 10:32:17 env -- common/autotest_common.sh@10 -- # set +x 00:03:49.920 ************************************ 00:03:49.920 START TEST env_dpdk_post_init 00:03:49.920 ************************************ 00:03:49.920 10:32:17 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:49.921 EAL: Detected CPU lcores: 112 00:03:49.921 EAL: Detected NUMA nodes: 2 00:03:49.921 EAL: Detected shared linkage of DPDK 00:03:49.921 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:49.921 EAL: Selected IOVA mode 'VA' 00:03:49.921 EAL: VFIO support initialized 00:03:49.921 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:49.921 EAL: Using IOMMU type 1 (Type 1) 00:03:49.921 EAL: Ignore mapping IO port bar(1) 00:03:49.921 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:50.180 EAL: Ignore mapping IO port bar(1) 00:03:50.180 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:50.180 EAL: Ignore mapping IO port bar(1) 00:03:50.180 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:50.180 EAL: Ignore mapping IO port bar(1) 00:03:50.180 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:50.180 EAL: Ignore mapping IO port bar(1) 00:03:50.180 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:50.180 EAL: Ignore mapping IO port bar(1) 00:03:50.180 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:50.180 EAL: Ignore mapping IO port bar(1) 00:03:50.180 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:50.180 EAL: Ignore mapping IO port bar(1) 00:03:50.180 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:50.180 EAL: Ignore mapping IO port bar(1) 00:03:50.180 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:50.180 EAL: Ignore mapping IO port bar(1) 00:03:50.180 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:50.180 EAL: Ignore mapping IO port bar(1) 00:03:50.180 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:50.180 EAL: Ignore mapping IO port 
bar(1) 00:03:50.180 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:50.180 EAL: Ignore mapping IO port bar(1) 00:03:50.180 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:50.180 EAL: Ignore mapping IO port bar(1) 00:03:50.180 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:50.180 EAL: Ignore mapping IO port bar(1) 00:03:50.180 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:50.180 EAL: Ignore mapping IO port bar(1) 00:03:50.180 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:51.117 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:03:55.311 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:03:55.311 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:03:55.311 Starting DPDK initialization... 00:03:55.311 Starting SPDK post initialization... 00:03:55.311 SPDK NVMe probe 00:03:55.311 Attaching to 0000:d8:00.0 00:03:55.311 Attached to 0000:d8:00.0 00:03:55.311 Cleaning up... 00:03:55.311 00:03:55.311 real 0m5.367s 00:03:55.311 user 0m3.740s 00:03:55.311 sys 0m0.683s 00:03:55.311 10:32:22 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:55.311 10:32:22 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:55.311 ************************************ 00:03:55.311 END TEST env_dpdk_post_init 00:03:55.311 ************************************ 00:03:55.311 10:32:22 env -- env/env.sh@26 -- # uname 00:03:55.311 10:32:22 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:55.311 10:32:22 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:55.311 10:32:22 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:55.311 10:32:22 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:55.311 10:32:22 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.311 ************************************ 00:03:55.311 START TEST env_mem_callbacks 00:03:55.311 ************************************ 00:03:55.311 10:32:22 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:55.311 EAL: Detected CPU lcores: 112 00:03:55.311 EAL: Detected NUMA nodes: 2 00:03:55.311 EAL: Detected shared linkage of DPDK 00:03:55.311 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:55.311 EAL: Selected IOVA mode 'VA' 00:03:55.311 EAL: VFIO support initialized 00:03:55.311 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:55.311 00:03:55.311 00:03:55.311 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.311 http://cunit.sourceforge.net/ 00:03:55.311 00:03:55.311 00:03:55.311 Suite: memory 00:03:55.311 Test: test ... 
00:03:55.311 register 0x200000200000 2097152 00:03:55.311 malloc 3145728 00:03:55.311 register 0x200000400000 4194304 00:03:55.311 buf 0x200000500000 len 3145728 PASSED 00:03:55.311 malloc 64 00:03:55.311 buf 0x2000004fff40 len 64 PASSED 00:03:55.311 malloc 4194304 00:03:55.311 register 0x200000800000 6291456 00:03:55.311 buf 0x200000a00000 len 4194304 PASSED 00:03:55.311 free 0x200000500000 3145728 00:03:55.311 free 0x2000004fff40 64 00:03:55.311 unregister 0x200000400000 4194304 PASSED 00:03:55.311 free 0x200000a00000 4194304 00:03:55.311 unregister 0x200000800000 6291456 PASSED 00:03:55.311 malloc 8388608 00:03:55.311 register 0x200000400000 10485760 00:03:55.311 buf 0x200000600000 len 8388608 PASSED 00:03:55.311 free 0x200000600000 8388608 00:03:55.311 unregister 0x200000400000 10485760 PASSED 00:03:55.311 passed 00:03:55.311 00:03:55.311 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.311 suites 1 1 n/a 0 0 00:03:55.311 tests 1 1 1 0 0 00:03:55.311 asserts 15 15 15 0 n/a 00:03:55.311 00:03:55.311 Elapsed time = 0.005 seconds 00:03:55.311 00:03:55.311 real 0m0.065s 00:03:55.311 user 0m0.023s 00:03:55.311 sys 0m0.042s 00:03:55.311 10:32:22 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:55.311 10:32:22 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:55.311 ************************************ 00:03:55.311 END TEST env_mem_callbacks 00:03:55.311 ************************************ 00:03:55.571 00:03:55.571 real 0m7.381s 00:03:55.571 user 0m4.819s 00:03:55.571 sys 0m1.629s 00:03:55.571 10:32:23 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:55.571 10:32:23 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.571 ************************************ 00:03:55.571 END TEST env 00:03:55.571 ************************************ 00:03:55.571 10:32:23 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:03:55.571 10:32:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:55.571 10:32:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:55.571 10:32:23 -- common/autotest_common.sh@10 -- # set +x 00:03:55.571 ************************************ 00:03:55.571 START TEST rpc 00:03:55.571 ************************************ 00:03:55.571 10:32:23 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:03:55.571 * Looking for test storage... 
00:03:55.571 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:55.571 10:32:23 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:55.571 10:32:23 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:55.571 10:32:23 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:55.832 10:32:23 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:55.832 10:32:23 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:55.832 10:32:23 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:55.832 10:32:23 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:55.832 10:32:23 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:55.832 10:32:23 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:55.832 10:32:23 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:55.832 10:32:23 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:55.832 10:32:23 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:55.832 10:32:23 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:55.832 10:32:23 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:55.832 10:32:23 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:55.832 10:32:23 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:55.832 10:32:23 rpc -- scripts/common.sh@345 -- # : 1 00:03:55.832 10:32:23 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:55.832 10:32:23 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:55.832 10:32:23 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:55.832 10:32:23 rpc -- scripts/common.sh@353 -- # local d=1 00:03:55.832 10:32:23 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:55.832 10:32:23 rpc -- scripts/common.sh@355 -- # echo 1 00:03:55.832 10:32:23 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:55.832 10:32:23 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:55.832 10:32:23 rpc -- scripts/common.sh@353 -- # local d=2 00:03:55.832 10:32:23 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:55.832 10:32:23 rpc -- scripts/common.sh@355 -- # echo 2 00:03:55.832 10:32:23 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:55.832 10:32:23 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:55.832 10:32:23 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:55.832 10:32:23 rpc -- scripts/common.sh@368 -- # return 0 00:03:55.832 10:32:23 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:55.832 10:32:23 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:55.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.832 --rc genhtml_branch_coverage=1 00:03:55.832 --rc genhtml_function_coverage=1 00:03:55.832 --rc genhtml_legend=1 00:03:55.832 --rc geninfo_all_blocks=1 00:03:55.832 --rc geninfo_unexecuted_blocks=1 00:03:55.832 00:03:55.832 ' 00:03:55.832 10:32:23 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:55.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.832 --rc genhtml_branch_coverage=1 00:03:55.832 --rc genhtml_function_coverage=1 00:03:55.832 --rc genhtml_legend=1 00:03:55.832 --rc geninfo_all_blocks=1 00:03:55.832 --rc geninfo_unexecuted_blocks=1 00:03:55.832 00:03:55.832 ' 00:03:55.832 10:32:23 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:55.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.832 --rc genhtml_branch_coverage=1 00:03:55.832 --rc genhtml_function_coverage=1 00:03:55.832 
--rc genhtml_legend=1 00:03:55.832 --rc geninfo_all_blocks=1 00:03:55.832 --rc geninfo_unexecuted_blocks=1 00:03:55.832 00:03:55.832 ' 00:03:55.832 10:32:23 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:55.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.832 --rc genhtml_branch_coverage=1 00:03:55.832 --rc genhtml_function_coverage=1 00:03:55.832 --rc genhtml_legend=1 00:03:55.832 --rc geninfo_all_blocks=1 00:03:55.832 --rc geninfo_unexecuted_blocks=1 00:03:55.832 00:03:55.832 ' 00:03:55.832 10:32:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3595562 00:03:55.832 10:32:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:55.832 10:32:23 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:55.832 10:32:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3595562 00:03:55.832 10:32:23 rpc -- common/autotest_common.sh@833 -- # '[' -z 3595562 ']' 00:03:55.832 10:32:23 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.832 10:32:23 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:55.832 10:32:23 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:55.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:55.832 10:32:23 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:55.832 10:32:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.832 [2024-11-07 10:32:23.347947] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:03:55.832 [2024-11-07 10:32:23.348006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3595562 ] 00:03:55.832 [2024-11-07 10:32:23.424713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.832 [2024-11-07 10:32:23.464084] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:55.832 [2024-11-07 10:32:23.464124] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3595562' to capture a snapshot of events at runtime. 00:03:55.832 [2024-11-07 10:32:23.464133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:55.832 [2024-11-07 10:32:23.464142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:55.832 [2024-11-07 10:32:23.464148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3595562 for offline analysis/debug. 
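For reference, the app_setup_trace NOTICE above names two ways to capture the bdev tracepoints enabled by the -e bdev flag. A minimal sketch of both paths, assuming the spdk_trace tool lives under build/bin (this log never shows its location); the app name, pid, and shared-memory file come verbatim from the NOTICE:

  # Take a live snapshot of events from the running target (pid 3595562)
  ./build/bin/spdk_trace -s spdk_tgt -p 3595562
  # Or copy the shared-memory trace file for offline analysis/debug
  cp /dev/shm/spdk_tgt_trace.pid3595562 /tmp/spdk_tgt_trace.pid3595562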
00:03:55.832 [2024-11-07 10:32:23.464770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.092 10:32:23 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:56.092 10:32:23 rpc -- common/autotest_common.sh@866 -- # return 0 00:03:56.092 10:32:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:56.092 10:32:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:56.092 10:32:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:56.092 10:32:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:56.092 10:32:23 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:56.092 10:32:23 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:56.092 10:32:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.092 ************************************ 00:03:56.092 START TEST rpc_integrity 00:03:56.092 ************************************ 00:03:56.092 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:56.092 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:56.092 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.092 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.092 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.092 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:56.092 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:56.092 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:56.092 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:56.352 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.352 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.352 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.352 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:56.352 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:56.352 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.352 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.352 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.352 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:56.352 { 00:03:56.352 "name": "Malloc0", 00:03:56.352 "aliases": [ 00:03:56.352 "c3742295-ee17-4c14-9380-cf132f8b155c" 00:03:56.352 ], 00:03:56.352 "product_name": "Malloc disk", 00:03:56.352 "block_size": 512, 00:03:56.352 "num_blocks": 16384, 00:03:56.352 "uuid": "c3742295-ee17-4c14-9380-cf132f8b155c", 00:03:56.352 "assigned_rate_limits": { 00:03:56.352 "rw_ios_per_sec": 0, 00:03:56.352 "rw_mbytes_per_sec": 0, 00:03:56.352 "r_mbytes_per_sec": 0, 00:03:56.352 "w_mbytes_per_sec": 0 00:03:56.352 }, 00:03:56.352 "claimed": false, 
00:03:56.352 "zoned": false, 00:03:56.352 "supported_io_types": { 00:03:56.352 "read": true, 00:03:56.352 "write": true, 00:03:56.352 "unmap": true, 00:03:56.352 "flush": true, 00:03:56.352 "reset": true, 00:03:56.352 "nvme_admin": false, 00:03:56.352 "nvme_io": false, 00:03:56.352 "nvme_io_md": false, 00:03:56.352 "write_zeroes": true, 00:03:56.352 "zcopy": true, 00:03:56.352 "get_zone_info": false, 00:03:56.352 "zone_management": false, 00:03:56.352 "zone_append": false, 00:03:56.352 "compare": false, 00:03:56.352 "compare_and_write": false, 00:03:56.352 "abort": true, 00:03:56.352 "seek_hole": false, 00:03:56.352 "seek_data": false, 00:03:56.352 "copy": true, 00:03:56.352 "nvme_iov_md": false 00:03:56.352 }, 00:03:56.352 "memory_domains": [ 00:03:56.352 { 00:03:56.352 "dma_device_id": "system", 00:03:56.352 "dma_device_type": 1 00:03:56.352 }, 00:03:56.352 { 00:03:56.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.352 "dma_device_type": 2 00:03:56.352 } 00:03:56.352 ], 00:03:56.352 "driver_specific": {} 00:03:56.352 } 00:03:56.352 ]' 00:03:56.352 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:56.352 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:56.352 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:56.352 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.352 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.352 [2024-11-07 10:32:23.847854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:56.352 [2024-11-07 10:32:23.847884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:56.353 [2024-11-07 10:32:23.847897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16e9610 00:03:56.353 [2024-11-07 10:32:23.847906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:56.353 [2024-11-07 10:32:23.848998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:56.353 [2024-11-07 10:32:23.849021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:56.353 Passthru0 00:03:56.353 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.353 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:56.353 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.353 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.353 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.353 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:56.353 { 00:03:56.353 "name": "Malloc0", 00:03:56.353 "aliases": [ 00:03:56.353 "c3742295-ee17-4c14-9380-cf132f8b155c" 00:03:56.353 ], 00:03:56.353 "product_name": "Malloc disk", 00:03:56.353 "block_size": 512, 00:03:56.353 "num_blocks": 16384, 00:03:56.353 "uuid": "c3742295-ee17-4c14-9380-cf132f8b155c", 00:03:56.353 "assigned_rate_limits": { 00:03:56.353 "rw_ios_per_sec": 0, 00:03:56.353 "rw_mbytes_per_sec": 0, 00:03:56.353 "r_mbytes_per_sec": 0, 00:03:56.353 "w_mbytes_per_sec": 0 00:03:56.353 }, 00:03:56.353 "claimed": true, 00:03:56.353 "claim_type": "exclusive_write", 00:03:56.353 "zoned": false, 00:03:56.353 "supported_io_types": { 00:03:56.353 "read": true, 00:03:56.353 "write": true, 00:03:56.353 "unmap": true, 00:03:56.353 "flush": true, 00:03:56.353 "reset": true, 
00:03:56.353 "nvme_admin": false, 00:03:56.353 "nvme_io": false, 00:03:56.353 "nvme_io_md": false, 00:03:56.353 "write_zeroes": true, 00:03:56.353 "zcopy": true, 00:03:56.353 "get_zone_info": false, 00:03:56.353 "zone_management": false, 00:03:56.353 "zone_append": false, 00:03:56.353 "compare": false, 00:03:56.353 "compare_and_write": false, 00:03:56.353 "abort": true, 00:03:56.353 "seek_hole": false, 00:03:56.353 "seek_data": false, 00:03:56.353 "copy": true, 00:03:56.353 "nvme_iov_md": false 00:03:56.353 }, 00:03:56.353 "memory_domains": [ 00:03:56.353 { 00:03:56.353 "dma_device_id": "system", 00:03:56.353 "dma_device_type": 1 00:03:56.353 }, 00:03:56.353 { 00:03:56.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.353 "dma_device_type": 2 00:03:56.353 } 00:03:56.353 ], 00:03:56.353 "driver_specific": {} 00:03:56.353 }, 00:03:56.353 { 00:03:56.353 "name": "Passthru0", 00:03:56.353 "aliases": [ 00:03:56.353 "57915cd7-639f-5d6e-914c-a2d7ff86482c" 00:03:56.353 ], 00:03:56.353 "product_name": "passthru", 00:03:56.353 "block_size": 512, 00:03:56.353 "num_blocks": 16384, 00:03:56.353 "uuid": "57915cd7-639f-5d6e-914c-a2d7ff86482c", 00:03:56.353 "assigned_rate_limits": { 00:03:56.353 "rw_ios_per_sec": 0, 00:03:56.353 "rw_mbytes_per_sec": 0, 00:03:56.353 "r_mbytes_per_sec": 0, 00:03:56.353 "w_mbytes_per_sec": 0 00:03:56.353 }, 00:03:56.353 "claimed": false, 00:03:56.353 "zoned": false, 00:03:56.353 "supported_io_types": { 00:03:56.353 "read": true, 00:03:56.353 "write": true, 00:03:56.353 "unmap": true, 00:03:56.353 "flush": true, 00:03:56.353 "reset": true, 00:03:56.353 "nvme_admin": false, 00:03:56.353 "nvme_io": false, 00:03:56.353 "nvme_io_md": false, 00:03:56.353 "write_zeroes": true, 00:03:56.353 "zcopy": true, 00:03:56.353 "get_zone_info": false, 00:03:56.353 "zone_management": false, 00:03:56.353 "zone_append": false, 00:03:56.353 "compare": false, 00:03:56.353 "compare_and_write": false, 00:03:56.353 "abort": true, 00:03:56.353 "seek_hole": false, 00:03:56.353 "seek_data": false, 00:03:56.353 "copy": true, 00:03:56.353 "nvme_iov_md": false 00:03:56.353 }, 00:03:56.353 "memory_domains": [ 00:03:56.353 { 00:03:56.353 "dma_device_id": "system", 00:03:56.353 "dma_device_type": 1 00:03:56.353 }, 00:03:56.353 { 00:03:56.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.353 "dma_device_type": 2 00:03:56.353 } 00:03:56.353 ], 00:03:56.353 "driver_specific": { 00:03:56.353 "passthru": { 00:03:56.353 "name": "Passthru0", 00:03:56.353 "base_bdev_name": "Malloc0" 00:03:56.353 } 00:03:56.353 } 00:03:56.353 } 00:03:56.353 ]' 00:03:56.353 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:56.353 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:56.353 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:56.353 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.353 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.353 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.353 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:56.353 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.353 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.353 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.353 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:56.353 
10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.353 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.353 10:32:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.353 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:56.353 10:32:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:56.353 10:32:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:56.353 00:03:56.353 real 0m0.292s 00:03:56.353 user 0m0.181s 00:03:56.353 sys 0m0.051s 00:03:56.353 10:32:24 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:56.353 10:32:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.353 ************************************ 00:03:56.353 END TEST rpc_integrity 00:03:56.353 ************************************ 00:03:56.612 10:32:24 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:56.612 10:32:24 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:56.612 10:32:24 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:56.612 10:32:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.612 ************************************ 00:03:56.612 START TEST rpc_plugins 00:03:56.612 ************************************ 00:03:56.612 10:32:24 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:03:56.612 10:32:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:56.612 10:32:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.612 10:32:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:56.612 10:32:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.613 10:32:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:56.613 10:32:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:56.613 10:32:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.613 10:32:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:56.613 10:32:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.613 10:32:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:56.613 { 00:03:56.613 "name": "Malloc1", 00:03:56.613 "aliases": [ 00:03:56.613 "e0eb3abb-0cfd-4a3f-92e9-685064695799" 00:03:56.613 ], 00:03:56.613 "product_name": "Malloc disk", 00:03:56.613 "block_size": 4096, 00:03:56.613 "num_blocks": 256, 00:03:56.613 "uuid": "e0eb3abb-0cfd-4a3f-92e9-685064695799", 00:03:56.613 "assigned_rate_limits": { 00:03:56.613 "rw_ios_per_sec": 0, 00:03:56.613 "rw_mbytes_per_sec": 0, 00:03:56.613 "r_mbytes_per_sec": 0, 00:03:56.613 "w_mbytes_per_sec": 0 00:03:56.613 }, 00:03:56.613 "claimed": false, 00:03:56.613 "zoned": false, 00:03:56.613 "supported_io_types": { 00:03:56.613 "read": true, 00:03:56.613 "write": true, 00:03:56.613 "unmap": true, 00:03:56.613 "flush": true, 00:03:56.613 "reset": true, 00:03:56.613 "nvme_admin": false, 00:03:56.613 "nvme_io": false, 00:03:56.613 "nvme_io_md": false, 00:03:56.613 "write_zeroes": true, 00:03:56.613 "zcopy": true, 00:03:56.613 "get_zone_info": false, 00:03:56.613 "zone_management": false, 00:03:56.613 "zone_append": false, 00:03:56.613 "compare": false, 00:03:56.613 "compare_and_write": false, 00:03:56.613 "abort": true, 00:03:56.613 "seek_hole": false, 00:03:56.613 "seek_data": false, 00:03:56.613 "copy": true, 00:03:56.613 "nvme_iov_md": false 00:03:56.613 }, 00:03:56.613 
"memory_domains": [ 00:03:56.613 { 00:03:56.613 "dma_device_id": "system", 00:03:56.613 "dma_device_type": 1 00:03:56.613 }, 00:03:56.613 { 00:03:56.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.613 "dma_device_type": 2 00:03:56.613 } 00:03:56.613 ], 00:03:56.613 "driver_specific": {} 00:03:56.613 } 00:03:56.613 ]' 00:03:56.613 10:32:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:56.613 10:32:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:56.613 10:32:24 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:56.613 10:32:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.613 10:32:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:56.613 10:32:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.613 10:32:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:56.613 10:32:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.613 10:32:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:56.613 10:32:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.613 10:32:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:56.613 10:32:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:56.613 10:32:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:56.613 00:03:56.613 real 0m0.140s 00:03:56.613 user 0m0.084s 00:03:56.613 sys 0m0.024s 00:03:56.613 10:32:24 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:56.613 10:32:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:56.613 ************************************ 00:03:56.613 END TEST rpc_plugins 00:03:56.613 ************************************ 00:03:56.613 10:32:24 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:56.613 10:32:24 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:56.613 10:32:24 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:56.613 10:32:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.872 ************************************ 00:03:56.872 START TEST rpc_trace_cmd_test 00:03:56.872 ************************************ 00:03:56.872 10:32:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:03:56.872 10:32:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:56.872 10:32:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:56.872 10:32:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.872 10:32:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:56.872 10:32:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.872 10:32:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:56.872 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3595562", 00:03:56.872 "tpoint_group_mask": "0x8", 00:03:56.872 "iscsi_conn": { 00:03:56.872 "mask": "0x2", 00:03:56.872 "tpoint_mask": "0x0" 00:03:56.872 }, 00:03:56.872 "scsi": { 00:03:56.872 "mask": "0x4", 00:03:56.872 "tpoint_mask": "0x0" 00:03:56.872 }, 00:03:56.872 "bdev": { 00:03:56.872 "mask": "0x8", 00:03:56.872 "tpoint_mask": "0xffffffffffffffff" 00:03:56.872 }, 00:03:56.872 "nvmf_rdma": { 00:03:56.872 "mask": "0x10", 00:03:56.872 "tpoint_mask": "0x0" 00:03:56.872 }, 00:03:56.872 "nvmf_tcp": { 00:03:56.872 "mask": "0x20", 00:03:56.872 "tpoint_mask": "0x0" 00:03:56.872 }, 
00:03:56.872 "ftl": { 00:03:56.872 "mask": "0x40", 00:03:56.872 "tpoint_mask": "0x0" 00:03:56.872 }, 00:03:56.872 "blobfs": { 00:03:56.873 "mask": "0x80", 00:03:56.873 "tpoint_mask": "0x0" 00:03:56.873 }, 00:03:56.873 "dsa": { 00:03:56.873 "mask": "0x200", 00:03:56.873 "tpoint_mask": "0x0" 00:03:56.873 }, 00:03:56.873 "thread": { 00:03:56.873 "mask": "0x400", 00:03:56.873 "tpoint_mask": "0x0" 00:03:56.873 }, 00:03:56.873 "nvme_pcie": { 00:03:56.873 "mask": "0x800", 00:03:56.873 "tpoint_mask": "0x0" 00:03:56.873 }, 00:03:56.873 "iaa": { 00:03:56.873 "mask": "0x1000", 00:03:56.873 "tpoint_mask": "0x0" 00:03:56.873 }, 00:03:56.873 "nvme_tcp": { 00:03:56.873 "mask": "0x2000", 00:03:56.873 "tpoint_mask": "0x0" 00:03:56.873 }, 00:03:56.873 "bdev_nvme": { 00:03:56.873 "mask": "0x4000", 00:03:56.873 "tpoint_mask": "0x0" 00:03:56.873 }, 00:03:56.873 "sock": { 00:03:56.873 "mask": "0x8000", 00:03:56.873 "tpoint_mask": "0x0" 00:03:56.873 }, 00:03:56.873 "blob": { 00:03:56.873 "mask": "0x10000", 00:03:56.873 "tpoint_mask": "0x0" 00:03:56.873 }, 00:03:56.873 "bdev_raid": { 00:03:56.873 "mask": "0x20000", 00:03:56.873 "tpoint_mask": "0x0" 00:03:56.873 }, 00:03:56.873 "scheduler": { 00:03:56.873 "mask": "0x40000", 00:03:56.873 "tpoint_mask": "0x0" 00:03:56.873 } 00:03:56.873 }' 00:03:56.873 10:32:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:56.873 10:32:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:56.873 10:32:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:56.873 10:32:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:56.873 10:32:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:56.873 10:32:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:56.873 10:32:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:56.873 10:32:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:56.873 10:32:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:56.873 10:32:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:56.873 00:03:56.873 real 0m0.221s 00:03:56.873 user 0m0.180s 00:03:56.873 sys 0m0.033s 00:03:56.873 10:32:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:56.873 10:32:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:56.873 ************************************ 00:03:56.873 END TEST rpc_trace_cmd_test 00:03:56.873 ************************************ 00:03:57.133 10:32:24 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:57.133 10:32:24 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:57.133 10:32:24 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:57.133 10:32:24 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:57.133 10:32:24 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:57.133 10:32:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.133 ************************************ 00:03:57.133 START TEST rpc_daemon_integrity 00:03:57.133 ************************************ 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:57.133 { 00:03:57.133 "name": "Malloc2", 00:03:57.133 "aliases": [ 00:03:57.133 "6e82268f-4483-41f9-8e1f-e411e6289f39" 00:03:57.133 ], 00:03:57.133 "product_name": "Malloc disk", 00:03:57.133 "block_size": 512, 00:03:57.133 "num_blocks": 16384, 00:03:57.133 "uuid": "6e82268f-4483-41f9-8e1f-e411e6289f39", 00:03:57.133 "assigned_rate_limits": { 00:03:57.133 "rw_ios_per_sec": 0, 00:03:57.133 "rw_mbytes_per_sec": 0, 00:03:57.133 "r_mbytes_per_sec": 0, 00:03:57.133 "w_mbytes_per_sec": 0 00:03:57.133 }, 00:03:57.133 "claimed": false, 00:03:57.133 "zoned": false, 00:03:57.133 "supported_io_types": { 00:03:57.133 "read": true, 00:03:57.133 "write": true, 00:03:57.133 "unmap": true, 00:03:57.133 "flush": true, 00:03:57.133 "reset": true, 00:03:57.133 "nvme_admin": false, 00:03:57.133 "nvme_io": false, 00:03:57.133 "nvme_io_md": false, 00:03:57.133 "write_zeroes": true, 00:03:57.133 "zcopy": true, 00:03:57.133 "get_zone_info": false, 00:03:57.133 "zone_management": false, 00:03:57.133 "zone_append": false, 00:03:57.133 "compare": false, 00:03:57.133 "compare_and_write": false, 00:03:57.133 "abort": true, 00:03:57.133 "seek_hole": false, 00:03:57.133 "seek_data": false, 00:03:57.133 "copy": true, 00:03:57.133 "nvme_iov_md": false 00:03:57.133 }, 00:03:57.133 "memory_domains": [ 00:03:57.133 { 00:03:57.133 "dma_device_id": "system", 00:03:57.133 "dma_device_type": 1 00:03:57.133 }, 00:03:57.133 { 00:03:57.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.133 "dma_device_type": 2 00:03:57.133 } 00:03:57.133 ], 00:03:57.133 "driver_specific": {} 00:03:57.133 } 00:03:57.133 ]' 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.133 [2024-11-07 10:32:24.750287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:57.133 [2024-11-07 10:32:24.750317] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:57.133 [2024-11-07 10:32:24.750330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1779a50 00:03:57.133 [2024-11-07 10:32:24.750338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:57.133 [2024-11-07 10:32:24.751277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:57.133 [2024-11-07 10:32:24.751301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:57.133 Passthru0 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.133 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:57.133 { 00:03:57.133 "name": "Malloc2", 00:03:57.133 "aliases": [ 00:03:57.133 "6e82268f-4483-41f9-8e1f-e411e6289f39" 00:03:57.133 ], 00:03:57.133 "product_name": "Malloc disk", 00:03:57.133 "block_size": 512, 00:03:57.133 "num_blocks": 16384, 00:03:57.133 "uuid": "6e82268f-4483-41f9-8e1f-e411e6289f39", 00:03:57.133 "assigned_rate_limits": { 00:03:57.133 "rw_ios_per_sec": 0, 00:03:57.133 "rw_mbytes_per_sec": 0, 00:03:57.133 "r_mbytes_per_sec": 0, 00:03:57.133 "w_mbytes_per_sec": 0 00:03:57.133 }, 00:03:57.133 "claimed": true, 00:03:57.133 "claim_type": "exclusive_write", 00:03:57.133 "zoned": false, 00:03:57.133 "supported_io_types": { 00:03:57.133 "read": true, 00:03:57.133 "write": true, 00:03:57.133 "unmap": true, 00:03:57.133 "flush": true, 00:03:57.133 "reset": true, 00:03:57.133 "nvme_admin": false, 00:03:57.133 "nvme_io": false, 00:03:57.133 "nvme_io_md": false, 00:03:57.133 "write_zeroes": true, 00:03:57.133 "zcopy": true, 00:03:57.133 "get_zone_info": false, 00:03:57.133 "zone_management": false, 00:03:57.133 "zone_append": false, 00:03:57.133 "compare": false, 00:03:57.133 "compare_and_write": false, 00:03:57.133 "abort": true, 00:03:57.133 "seek_hole": false, 00:03:57.133 "seek_data": false, 00:03:57.133 "copy": true, 00:03:57.133 "nvme_iov_md": false 00:03:57.133 }, 00:03:57.133 "memory_domains": [ 00:03:57.133 { 00:03:57.133 "dma_device_id": "system", 00:03:57.133 "dma_device_type": 1 00:03:57.133 }, 00:03:57.133 { 00:03:57.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.133 "dma_device_type": 2 00:03:57.133 } 00:03:57.133 ], 00:03:57.133 "driver_specific": {} 00:03:57.133 }, 00:03:57.133 { 00:03:57.133 "name": "Passthru0", 00:03:57.133 "aliases": [ 00:03:57.133 "ec3dae99-0794-54d2-9595-202f89fdac57" 00:03:57.133 ], 00:03:57.133 "product_name": "passthru", 00:03:57.133 "block_size": 512, 00:03:57.133 "num_blocks": 16384, 00:03:57.133 "uuid": "ec3dae99-0794-54d2-9595-202f89fdac57", 00:03:57.133 "assigned_rate_limits": { 00:03:57.133 "rw_ios_per_sec": 0, 00:03:57.133 "rw_mbytes_per_sec": 0, 00:03:57.133 "r_mbytes_per_sec": 0, 00:03:57.133 "w_mbytes_per_sec": 0 00:03:57.133 }, 00:03:57.133 "claimed": false, 00:03:57.133 "zoned": false, 00:03:57.133 "supported_io_types": { 00:03:57.133 "read": true, 00:03:57.133 "write": true, 00:03:57.133 "unmap": true, 00:03:57.133 "flush": true, 00:03:57.133 "reset": true, 00:03:57.133 "nvme_admin": false, 
00:03:57.133 "nvme_io": false, 00:03:57.133 "nvme_io_md": false, 00:03:57.133 "write_zeroes": true, 00:03:57.133 "zcopy": true, 00:03:57.133 "get_zone_info": false, 00:03:57.133 "zone_management": false, 00:03:57.133 "zone_append": false, 00:03:57.133 "compare": false, 00:03:57.133 "compare_and_write": false, 00:03:57.133 "abort": true, 00:03:57.133 "seek_hole": false, 00:03:57.134 "seek_data": false, 00:03:57.134 "copy": true, 00:03:57.134 "nvme_iov_md": false 00:03:57.134 }, 00:03:57.134 "memory_domains": [ 00:03:57.134 { 00:03:57.134 "dma_device_id": "system", 00:03:57.134 "dma_device_type": 1 00:03:57.134 }, 00:03:57.134 { 00:03:57.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.134 "dma_device_type": 2 00:03:57.134 } 00:03:57.134 ], 00:03:57.134 "driver_specific": { 00:03:57.134 "passthru": { 00:03:57.134 "name": "Passthru0", 00:03:57.134 "base_bdev_name": "Malloc2" 00:03:57.134 } 00:03:57.134 } 00:03:57.134 } 00:03:57.134 ]' 00:03:57.134 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:57.392 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:57.392 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:57.392 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.392 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.392 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.392 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:57.392 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.392 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.392 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.392 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:57.392 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.392 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.392 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.392 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:57.392 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:57.392 10:32:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:57.392 00:03:57.392 real 0m0.297s 00:03:57.392 user 0m0.179s 00:03:57.392 sys 0m0.055s 00:03:57.392 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:57.392 10:32:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.392 ************************************ 00:03:57.392 END TEST rpc_daemon_integrity 00:03:57.392 ************************************ 00:03:57.392 10:32:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:57.392 10:32:24 rpc -- rpc/rpc.sh@84 -- # killprocess 3595562 00:03:57.392 10:32:24 rpc -- common/autotest_common.sh@952 -- # '[' -z 3595562 ']' 00:03:57.392 10:32:24 rpc -- common/autotest_common.sh@956 -- # kill -0 3595562 00:03:57.392 10:32:24 rpc -- common/autotest_common.sh@957 -- # uname 00:03:57.392 10:32:24 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:57.392 10:32:24 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3595562 00:03:57.392 10:32:25 rpc -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:57.392 10:32:25 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:57.392 10:32:25 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3595562' 00:03:57.392 killing process with pid 3595562 00:03:57.392 10:32:25 rpc -- common/autotest_common.sh@971 -- # kill 3595562 00:03:57.392 10:32:25 rpc -- common/autotest_common.sh@976 -- # wait 3595562 00:03:57.652 00:03:57.652 real 0m2.215s 00:03:57.652 user 0m2.782s 00:03:57.652 sys 0m0.824s 00:03:57.652 10:32:25 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:57.652 10:32:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.652 ************************************ 00:03:57.652 END TEST rpc 00:03:57.652 ************************************ 00:03:57.911 10:32:25 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:57.912 10:32:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:57.912 10:32:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:57.912 10:32:25 -- common/autotest_common.sh@10 -- # set +x 00:03:57.912 ************************************ 00:03:57.912 START TEST skip_rpc 00:03:57.912 ************************************ 00:03:57.912 10:32:25 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:57.912 * Looking for test storage... 00:03:57.912 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:57.912 10:32:25 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:57.912 10:32:25 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:57.912 10:32:25 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:57.912 10:32:25 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.912 10:32:25 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:57.912 10:32:25 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.912 10:32:25 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:57.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.912 --rc genhtml_branch_coverage=1 00:03:57.912 --rc genhtml_function_coverage=1 00:03:57.912 --rc genhtml_legend=1 00:03:57.912 --rc geninfo_all_blocks=1 00:03:57.912 --rc geninfo_unexecuted_blocks=1 00:03:57.912 00:03:57.912 ' 00:03:57.912 10:32:25 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:57.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.912 --rc genhtml_branch_coverage=1 00:03:57.912 --rc genhtml_function_coverage=1 00:03:57.912 --rc genhtml_legend=1 00:03:57.912 --rc geninfo_all_blocks=1 00:03:57.912 --rc geninfo_unexecuted_blocks=1 00:03:57.912 00:03:57.912 ' 00:03:57.912 10:32:25 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:57.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.912 --rc genhtml_branch_coverage=1 00:03:57.912 --rc genhtml_function_coverage=1 00:03:57.912 --rc genhtml_legend=1 00:03:57.912 --rc geninfo_all_blocks=1 00:03:57.912 --rc geninfo_unexecuted_blocks=1 00:03:57.912 00:03:57.912 ' 00:03:57.912 10:32:25 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:57.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.912 --rc genhtml_branch_coverage=1 00:03:57.912 --rc genhtml_function_coverage=1 00:03:57.912 --rc genhtml_legend=1 00:03:57.912 --rc geninfo_all_blocks=1 00:03:57.912 --rc geninfo_unexecuted_blocks=1 00:03:57.912 00:03:57.912 ' 00:03:57.912 10:32:25 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:57.912 10:32:25 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:03:57.912 10:32:25 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:57.912 10:32:25 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:57.912 10:32:25 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:57.912 10:32:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.176 ************************************ 00:03:58.176 START TEST skip_rpc 00:03:58.176 ************************************ 00:03:58.176 10:32:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:03:58.176 10:32:25 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3596027 00:03:58.176 10:32:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.176 10:32:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:58.176 10:32:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:58.176 [2024-11-07 10:32:25.665617] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:03:58.176 [2024-11-07 10:32:25.665658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596027 ] 00:03:58.176 [2024-11-07 10:32:25.738023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.176 [2024-11-07 10:32:25.776068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3596027 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 3596027 ']' 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 3596027 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3596027 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3596027' 00:04:03.549 killing process with pid 3596027 00:04:03.549 10:32:30 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 3596027 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 3596027 00:04:03.549 00:04:03.549 real 0m5.379s 00:04:03.549 user 0m5.152s 00:04:03.549 sys 0m0.279s 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:03.549 10:32:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.549 ************************************ 00:04:03.549 END TEST skip_rpc 00:04:03.549 ************************************ 00:04:03.549 10:32:31 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:03.549 10:32:31 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:03.549 10:32:31 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:03.549 10:32:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.549 ************************************ 00:04:03.549 START TEST skip_rpc_with_json 00:04:03.549 ************************************ 00:04:03.549 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:03.549 10:32:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:03.549 10:32:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3597110 00:04:03.549 10:32:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.549 10:32:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:03.549 10:32:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3597110 00:04:03.549 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 3597110 ']' 00:04:03.549 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.549 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:03.549 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.549 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:03.549 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:03.549 [2024-11-07 10:32:31.127178] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
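The skip_rpc_with_json test starting here exercises the save-config round trip that the log below records. A rough sketch of the RPC sequence, where rpc_cmd is the autotest helper that forwards to scripts/rpc.py on /var/tmp/spdk.sock; the redirection into CONFIG_PATH is inferred from the cat of config.json that follows, and the restart-from-JSON step is an assumption since it falls outside this excerpt:

  # No transport exists yet, so the first query is expected to fail
  rpc_cmd nvmf_get_transports --trtype tcp
  # Create the TCP transport, then dump the full subsystem config as JSON
  rpc_cmd nvmf_create_transport -t tcp
  rpc_cmd save_config > "$CONFIG_PATH"   # CONFIG_PATH=test/rpc/config.json, set earlier in skip_rpc.sh
  # Presumably the target is then restarted from that JSON to verify the round trip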
00:04:03.549 [2024-11-07 10:32:31.127223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3597110 ] 00:04:03.549 [2024-11-07 10:32:31.201074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.809 [2024-11-07 10:32:31.237098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.809 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:03.809 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:03.809 10:32:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:03.809 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.809 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:03.809 [2024-11-07 10:32:31.456574] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:03.809 request: 00:04:03.809 { 00:04:03.809 "trtype": "tcp", 00:04:03.809 "method": "nvmf_get_transports", 00:04:03.809 "req_id": 1 00:04:03.809 } 00:04:03.809 Got JSON-RPC error response 00:04:03.809 response: 00:04:03.809 { 00:04:03.809 "code": -19, 00:04:03.809 "message": "No such device" 00:04:03.809 } 00:04:03.809 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:03.809 10:32:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:03.809 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.809 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:03.809 [2024-11-07 10:32:31.468685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:03.809 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.809 10:32:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:03.809 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.809 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.068 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:04.068 10:32:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:04.068 { 00:04:04.068 "subsystems": [ 00:04:04.068 { 00:04:04.068 "subsystem": "fsdev", 00:04:04.068 "config": [ 00:04:04.068 { 00:04:04.068 "method": "fsdev_set_opts", 00:04:04.068 "params": { 00:04:04.068 "fsdev_io_pool_size": 65535, 00:04:04.068 "fsdev_io_cache_size": 256 00:04:04.068 } 00:04:04.068 } 00:04:04.068 ] 00:04:04.068 }, 00:04:04.068 { 00:04:04.068 "subsystem": "keyring", 00:04:04.069 "config": [] 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "subsystem": "iobuf", 00:04:04.069 "config": [ 00:04:04.069 { 00:04:04.069 "method": "iobuf_set_options", 00:04:04.069 "params": { 00:04:04.069 "small_pool_count": 8192, 00:04:04.069 "large_pool_count": 1024, 00:04:04.069 "small_bufsize": 8192, 00:04:04.069 "large_bufsize": 135168, 00:04:04.069 "enable_numa": false 00:04:04.069 } 00:04:04.069 } 00:04:04.069 ] 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "subsystem": "sock", 00:04:04.069 "config": [ 00:04:04.069 { 
00:04:04.069 "method": "sock_set_default_impl", 00:04:04.069 "params": { 00:04:04.069 "impl_name": "posix" 00:04:04.069 } 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "method": "sock_impl_set_options", 00:04:04.069 "params": { 00:04:04.069 "impl_name": "ssl", 00:04:04.069 "recv_buf_size": 4096, 00:04:04.069 "send_buf_size": 4096, 00:04:04.069 "enable_recv_pipe": true, 00:04:04.069 "enable_quickack": false, 00:04:04.069 "enable_placement_id": 0, 00:04:04.069 "enable_zerocopy_send_server": true, 00:04:04.069 "enable_zerocopy_send_client": false, 00:04:04.069 "zerocopy_threshold": 0, 00:04:04.069 "tls_version": 0, 00:04:04.069 "enable_ktls": false 00:04:04.069 } 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "method": "sock_impl_set_options", 00:04:04.069 "params": { 00:04:04.069 "impl_name": "posix", 00:04:04.069 "recv_buf_size": 2097152, 00:04:04.069 "send_buf_size": 2097152, 00:04:04.069 "enable_recv_pipe": true, 00:04:04.069 "enable_quickack": false, 00:04:04.069 "enable_placement_id": 0, 00:04:04.069 "enable_zerocopy_send_server": true, 00:04:04.069 "enable_zerocopy_send_client": false, 00:04:04.069 "zerocopy_threshold": 0, 00:04:04.069 "tls_version": 0, 00:04:04.069 "enable_ktls": false 00:04:04.069 } 00:04:04.069 } 00:04:04.069 ] 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "subsystem": "vmd", 00:04:04.069 "config": [] 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "subsystem": "accel", 00:04:04.069 "config": [ 00:04:04.069 { 00:04:04.069 "method": "accel_set_options", 00:04:04.069 "params": { 00:04:04.069 "small_cache_size": 128, 00:04:04.069 "large_cache_size": 16, 00:04:04.069 "task_count": 2048, 00:04:04.069 "sequence_count": 2048, 00:04:04.069 "buf_count": 2048 00:04:04.069 } 00:04:04.069 } 00:04:04.069 ] 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "subsystem": "bdev", 00:04:04.069 "config": [ 00:04:04.069 { 00:04:04.069 "method": "bdev_set_options", 00:04:04.069 "params": { 00:04:04.069 "bdev_io_pool_size": 65535, 00:04:04.069 "bdev_io_cache_size": 256, 00:04:04.069 "bdev_auto_examine": true, 00:04:04.069 "iobuf_small_cache_size": 128, 00:04:04.069 "iobuf_large_cache_size": 16 00:04:04.069 } 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "method": "bdev_raid_set_options", 00:04:04.069 "params": { 00:04:04.069 "process_window_size_kb": 1024, 00:04:04.069 "process_max_bandwidth_mb_sec": 0 00:04:04.069 } 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "method": "bdev_iscsi_set_options", 00:04:04.069 "params": { 00:04:04.069 "timeout_sec": 30 00:04:04.069 } 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "method": "bdev_nvme_set_options", 00:04:04.069 "params": { 00:04:04.069 "action_on_timeout": "none", 00:04:04.069 "timeout_us": 0, 00:04:04.069 "timeout_admin_us": 0, 00:04:04.069 "keep_alive_timeout_ms": 10000, 00:04:04.069 "arbitration_burst": 0, 00:04:04.069 "low_priority_weight": 0, 00:04:04.069 "medium_priority_weight": 0, 00:04:04.069 "high_priority_weight": 0, 00:04:04.069 "nvme_adminq_poll_period_us": 10000, 00:04:04.069 "nvme_ioq_poll_period_us": 0, 00:04:04.069 "io_queue_requests": 0, 00:04:04.069 "delay_cmd_submit": true, 00:04:04.069 "transport_retry_count": 4, 00:04:04.069 "bdev_retry_count": 3, 00:04:04.069 "transport_ack_timeout": 0, 00:04:04.069 "ctrlr_loss_timeout_sec": 0, 00:04:04.069 "reconnect_delay_sec": 0, 00:04:04.069 "fast_io_fail_timeout_sec": 0, 00:04:04.069 "disable_auto_failback": false, 00:04:04.069 "generate_uuids": false, 00:04:04.069 "transport_tos": 0, 00:04:04.069 "nvme_error_stat": false, 00:04:04.069 "rdma_srq_size": 0, 00:04:04.069 "io_path_stat": false, 
00:04:04.069 "allow_accel_sequence": false, 00:04:04.069 "rdma_max_cq_size": 0, 00:04:04.069 "rdma_cm_event_timeout_ms": 0, 00:04:04.069 "dhchap_digests": [ 00:04:04.069 "sha256", 00:04:04.069 "sha384", 00:04:04.069 "sha512" 00:04:04.069 ], 00:04:04.069 "dhchap_dhgroups": [ 00:04:04.069 "null", 00:04:04.069 "ffdhe2048", 00:04:04.069 "ffdhe3072", 00:04:04.069 "ffdhe4096", 00:04:04.069 "ffdhe6144", 00:04:04.069 "ffdhe8192" 00:04:04.069 ] 00:04:04.069 } 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "method": "bdev_nvme_set_hotplug", 00:04:04.069 "params": { 00:04:04.069 "period_us": 100000, 00:04:04.069 "enable": false 00:04:04.069 } 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "method": "bdev_wait_for_examine" 00:04:04.069 } 00:04:04.069 ] 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "subsystem": "scsi", 00:04:04.069 "config": null 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "subsystem": "scheduler", 00:04:04.069 "config": [ 00:04:04.069 { 00:04:04.069 "method": "framework_set_scheduler", 00:04:04.069 "params": { 00:04:04.069 "name": "static" 00:04:04.069 } 00:04:04.069 } 00:04:04.069 ] 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "subsystem": "vhost_scsi", 00:04:04.069 "config": [] 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "subsystem": "vhost_blk", 00:04:04.069 "config": [] 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "subsystem": "ublk", 00:04:04.069 "config": [] 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "subsystem": "nbd", 00:04:04.069 "config": [] 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "subsystem": "nvmf", 00:04:04.069 "config": [ 00:04:04.069 { 00:04:04.069 "method": "nvmf_set_config", 00:04:04.069 "params": { 00:04:04.069 "discovery_filter": "match_any", 00:04:04.069 "admin_cmd_passthru": { 00:04:04.069 "identify_ctrlr": false 00:04:04.069 }, 00:04:04.069 "dhchap_digests": [ 00:04:04.069 "sha256", 00:04:04.069 "sha384", 00:04:04.069 "sha512" 00:04:04.069 ], 00:04:04.069 "dhchap_dhgroups": [ 00:04:04.069 "null", 00:04:04.069 "ffdhe2048", 00:04:04.069 "ffdhe3072", 00:04:04.069 "ffdhe4096", 00:04:04.069 "ffdhe6144", 00:04:04.069 "ffdhe8192" 00:04:04.069 ] 00:04:04.069 } 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "method": "nvmf_set_max_subsystems", 00:04:04.069 "params": { 00:04:04.069 "max_subsystems": 1024 00:04:04.069 } 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "method": "nvmf_set_crdt", 00:04:04.069 "params": { 00:04:04.069 "crdt1": 0, 00:04:04.069 "crdt2": 0, 00:04:04.069 "crdt3": 0 00:04:04.069 } 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "method": "nvmf_create_transport", 00:04:04.069 "params": { 00:04:04.069 "trtype": "TCP", 00:04:04.069 "max_queue_depth": 128, 00:04:04.069 "max_io_qpairs_per_ctrlr": 127, 00:04:04.069 "in_capsule_data_size": 4096, 00:04:04.069 "max_io_size": 131072, 00:04:04.069 "io_unit_size": 131072, 00:04:04.069 "max_aq_depth": 128, 00:04:04.069 "num_shared_buffers": 511, 00:04:04.069 "buf_cache_size": 4294967295, 00:04:04.069 "dif_insert_or_strip": false, 00:04:04.069 "zcopy": false, 00:04:04.069 "c2h_success": true, 00:04:04.069 "sock_priority": 0, 00:04:04.069 "abort_timeout_sec": 1, 00:04:04.069 "ack_timeout": 0, 00:04:04.069 "data_wr_pool_size": 0 00:04:04.069 } 00:04:04.069 } 00:04:04.069 ] 00:04:04.069 }, 00:04:04.069 { 00:04:04.069 "subsystem": "iscsi", 00:04:04.069 "config": [ 00:04:04.069 { 00:04:04.069 "method": "iscsi_set_options", 00:04:04.069 "params": { 00:04:04.069 "node_base": "iqn.2016-06.io.spdk", 00:04:04.069 "max_sessions": 128, 00:04:04.069 "max_connections_per_session": 2, 00:04:04.069 "max_queue_depth": 64, 00:04:04.069 
"default_time2wait": 2, 00:04:04.069 "default_time2retain": 20, 00:04:04.069 "first_burst_length": 8192, 00:04:04.069 "immediate_data": true, 00:04:04.069 "allow_duplicated_isid": false, 00:04:04.069 "error_recovery_level": 0, 00:04:04.069 "nop_timeout": 60, 00:04:04.069 "nop_in_interval": 30, 00:04:04.069 "disable_chap": false, 00:04:04.069 "require_chap": false, 00:04:04.069 "mutual_chap": false, 00:04:04.069 "chap_group": 0, 00:04:04.069 "max_large_datain_per_connection": 64, 00:04:04.069 "max_r2t_per_connection": 4, 00:04:04.069 "pdu_pool_size": 36864, 00:04:04.069 "immediate_data_pool_size": 16384, 00:04:04.069 "data_out_pool_size": 2048 00:04:04.069 } 00:04:04.069 } 00:04:04.069 ] 00:04:04.069 } 00:04:04.069 ] 00:04:04.069 } 00:04:04.069 10:32:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:04.069 10:32:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3597110 00:04:04.069 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3597110 ']' 00:04:04.070 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3597110 00:04:04.070 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:04.070 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:04.070 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3597110 00:04:04.070 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:04.070 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:04.070 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3597110' 00:04:04.070 killing process with pid 3597110 00:04:04.070 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3597110 00:04:04.070 10:32:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3597110 00:04:04.637 10:32:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3597135 00:04:04.637 10:32:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:04.637 10:32:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:09.908 10:32:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3597135 00:04:09.908 10:32:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3597135 ']' 00:04:09.908 10:32:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3597135 00:04:09.908 10:32:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:09.908 10:32:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:09.908 10:32:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3597135 00:04:09.908 10:32:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:09.908 10:32:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:09.908 10:32:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3597135' 00:04:09.908 killing process with pid 3597135 00:04:09.909 10:32:37 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3597135 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3597135 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:09.909 00:04:09.909 real 0m6.308s 00:04:09.909 user 0m5.990s 00:04:09.909 sys 0m0.634s 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:09.909 ************************************ 00:04:09.909 END TEST skip_rpc_with_json 00:04:09.909 ************************************ 00:04:09.909 10:32:37 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:09.909 10:32:37 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:09.909 10:32:37 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:09.909 10:32:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.909 ************************************ 00:04:09.909 START TEST skip_rpc_with_delay 00:04:09.909 ************************************ 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:09.909 [2024-11-07 10:32:37.525360] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
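The failure above is deliberate: test_skip_rpc_with_delay asserts that spdk_tgt refuses to combine --no-rpc-server with --wait-for-rpc, since there would be no RPC server left to wake the app. A minimal sketch of the same assertion outside the harness, with a plain `!` standing in for the NOT helper (binary path as used in this workspace):

    # spdk_tgt must exit non-zero: app.c rejects --wait-for-rpc without an RPC server.
    SPDK_TGT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
    if ! "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'spdk_tgt failed as the test expects'
    fi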
00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:09.909 00:04:09.909 real 0m0.075s 00:04:09.909 user 0m0.042s 00:04:09.909 sys 0m0.032s 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:09.909 10:32:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:09.909 ************************************ 00:04:09.909 END TEST skip_rpc_with_delay 00:04:09.909 ************************************ 00:04:10.168 10:32:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:10.168 10:32:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:10.168 10:32:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:10.168 10:32:37 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:10.168 10:32:37 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:10.168 10:32:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.168 ************************************ 00:04:10.168 START TEST exit_on_failed_rpc_init 00:04:10.168 ************************************ 00:04:10.168 10:32:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:10.168 10:32:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3598249 00:04:10.168 10:32:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3598249 00:04:10.168 10:32:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:10.168 10:32:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 3598249 ']' 00:04:10.168 10:32:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.168 10:32:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:10.168 10:32:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.168 10:32:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:10.168 10:32:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:10.168 [2024-11-07 10:32:37.678688] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:10.168 [2024-11-07 10:32:37.678733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3598249 ] 00:04:10.168 [2024-11-07 10:32:37.752053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.168 [2024-11-07 10:32:37.792498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.428 10:32:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:10.428 10:32:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:10.428 10:32:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.428 10:32:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:10.428 10:32:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:10.428 10:32:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:10.428 10:32:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.428 10:32:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:10.428 10:32:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.428 10:32:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:10.428 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.428 10:32:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:10.428 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.428 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:10.428 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:10.428 [2024-11-07 10:32:38.057371] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:10.428 [2024-11-07 10:32:38.057423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3598255 ] 00:04:10.688 [2024-11-07 10:32:38.132560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.688 [2024-11-07 10:32:38.171461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.688 [2024-11-07 10:32:38.171524] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
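This error is the scenario exit_on_failed_rpc_init is built around: the first spdk_tgt already listens on the default /var/tmp/spdk.sock, so the second instance cannot bind it and rpc_listen fails. A rough sketch of the collision and of the -r flag that points a second target at a different socket (the spdk2.sock path is hypothetical, and sleep is a crude stand-in for the harness's waitforlisten):

    SPDK_TGT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt

    "$SPDK_TGT" -m 0x1 &            # first target takes /var/tmp/spdk.sock
    first=$!
    sleep 2                         # crude wait for the RPC socket to come up

    ! "$SPDK_TGT" -m 0x2            # second target fails: socket path already in use

    "$SPDK_TGT" -m 0x2 -r /var/tmp/spdk2.sock &   # separate socket, no collision
    second=$!
    kill "$first" "$second"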
00:04:10.688 [2024-11-07 10:32:38.171536] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:10.688 [2024-11-07 10:32:38.171544] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:10.688 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:10.688 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:10.688 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:10.688 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:10.688 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:10.688 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:10.688 10:32:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:10.688 10:32:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3598249 00:04:10.688 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 3598249 ']' 00:04:10.688 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 3598249 00:04:10.688 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:10.688 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:10.688 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3598249 00:04:10.688 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:10.688 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:10.688 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3598249' 00:04:10.688 killing process with pid 3598249 00:04:10.688 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 3598249 00:04:10.688 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 3598249 00:04:10.948 00:04:10.948 real 0m0.948s 00:04:10.948 user 0m0.999s 00:04:10.948 sys 0m0.401s 00:04:10.948 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:10.948 10:32:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:10.948 ************************************ 00:04:10.948 END TEST exit_on_failed_rpc_init 00:04:10.948 ************************************ 00:04:11.207 10:32:38 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:11.207 00:04:11.207 real 0m13.234s 00:04:11.207 user 0m12.390s 00:04:11.207 sys 0m1.707s 00:04:11.207 10:32:38 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:11.207 10:32:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.207 ************************************ 00:04:11.208 END TEST skip_rpc 00:04:11.208 ************************************ 00:04:11.208 10:32:38 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:11.208 10:32:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:11.208 10:32:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:11.208 10:32:38 -- 
common/autotest_common.sh@10 -- # set +x 00:04:11.208 ************************************ 00:04:11.208 START TEST rpc_client 00:04:11.208 ************************************ 00:04:11.208 10:32:38 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:11.208 * Looking for test storage... 00:04:11.208 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:04:11.208 10:32:38 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:11.208 10:32:38 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:11.208 10:32:38 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:11.208 10:32:38 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.208 10:32:38 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:11.467 10:32:38 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.467 10:32:38 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.467 10:32:38 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.467 10:32:38 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:11.468 10:32:38 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.468 10:32:38 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:11.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.468 --rc genhtml_branch_coverage=1 00:04:11.468 --rc genhtml_function_coverage=1 00:04:11.468 --rc genhtml_legend=1 00:04:11.468 --rc geninfo_all_blocks=1 00:04:11.468 --rc geninfo_unexecuted_blocks=1 00:04:11.468 00:04:11.468 ' 00:04:11.468 10:32:38 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:11.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.468 --rc genhtml_branch_coverage=1 00:04:11.468 --rc genhtml_function_coverage=1 00:04:11.468 --rc genhtml_legend=1 00:04:11.468 --rc geninfo_all_blocks=1 00:04:11.468 --rc geninfo_unexecuted_blocks=1 00:04:11.468 00:04:11.468 ' 00:04:11.468 10:32:38 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:11.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.468 --rc genhtml_branch_coverage=1 00:04:11.468 --rc genhtml_function_coverage=1 00:04:11.468 --rc genhtml_legend=1 00:04:11.468 --rc geninfo_all_blocks=1 00:04:11.468 --rc geninfo_unexecuted_blocks=1 00:04:11.468 00:04:11.468 ' 00:04:11.468 10:32:38 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:11.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.468 --rc genhtml_branch_coverage=1 00:04:11.468 --rc genhtml_function_coverage=1 00:04:11.468 --rc genhtml_legend=1 00:04:11.468 --rc geninfo_all_blocks=1 00:04:11.468 --rc geninfo_unexecuted_blocks=1 00:04:11.468 00:04:11.468 ' 00:04:11.468 10:32:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:11.468 OK 00:04:11.468 10:32:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:11.468 00:04:11.468 real 0m0.205s 00:04:11.468 user 0m0.107s 00:04:11.468 sys 0m0.113s 00:04:11.468 10:32:38 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:11.468 10:32:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:11.468 ************************************ 00:04:11.468 END TEST rpc_client 00:04:11.468 ************************************ 00:04:11.468 10:32:38 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:11.468 
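rpc_client_test above exercises the C JSON-RPC client library against a live target; the same socket can be poked from the shell with scripts/rpc.py. A small sketch using two standard introspection calls (default socket path assumed):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Ask the target what it can do, then confirm which build is answering.
    "$RPC" -s /var/tmp/spdk.sock rpc_get_methods
    "$RPC" -s /var/tmp/spdk.sock spdk_get_version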
10:32:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:11.468 10:32:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:11.468 10:32:38 -- common/autotest_common.sh@10 -- # set +x 00:04:11.468 ************************************ 00:04:11.468 START TEST json_config 00:04:11.468 ************************************ 00:04:11.468 10:32:38 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:11.468 10:32:39 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:11.468 10:32:39 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:11.468 10:32:39 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:11.468 10:32:39 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:11.468 10:32:39 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.468 10:32:39 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.468 10:32:39 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.468 10:32:39 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.468 10:32:39 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.468 10:32:39 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.468 10:32:39 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.728 10:32:39 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.728 10:32:39 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.729 10:32:39 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.729 10:32:39 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.729 10:32:39 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:11.729 10:32:39 json_config -- scripts/common.sh@345 -- # : 1 00:04:11.729 10:32:39 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.729 10:32:39 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:11.729 10:32:39 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:11.729 10:32:39 json_config -- scripts/common.sh@353 -- # local d=1 00:04:11.729 10:32:39 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.729 10:32:39 json_config -- scripts/common.sh@355 -- # echo 1 00:04:11.729 10:32:39 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.729 10:32:39 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:11.729 10:32:39 json_config -- scripts/common.sh@353 -- # local d=2 00:04:11.729 10:32:39 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.729 10:32:39 json_config -- scripts/common.sh@355 -- # echo 2 00:04:11.729 10:32:39 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.729 10:32:39 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.729 10:32:39 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.729 10:32:39 json_config -- scripts/common.sh@368 -- # return 0 00:04:11.729 10:32:39 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.729 10:32:39 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:11.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.729 --rc genhtml_branch_coverage=1 00:04:11.729 --rc genhtml_function_coverage=1 00:04:11.729 --rc genhtml_legend=1 00:04:11.729 --rc geninfo_all_blocks=1 00:04:11.729 --rc geninfo_unexecuted_blocks=1 00:04:11.729 00:04:11.729 ' 00:04:11.729 10:32:39 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:11.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.729 --rc genhtml_branch_coverage=1 00:04:11.729 --rc genhtml_function_coverage=1 00:04:11.729 --rc genhtml_legend=1 00:04:11.729 --rc geninfo_all_blocks=1 00:04:11.729 --rc geninfo_unexecuted_blocks=1 00:04:11.729 00:04:11.729 ' 00:04:11.729 10:32:39 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:11.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.729 --rc genhtml_branch_coverage=1 00:04:11.729 --rc genhtml_function_coverage=1 00:04:11.729 --rc genhtml_legend=1 00:04:11.729 --rc geninfo_all_blocks=1 00:04:11.729 --rc geninfo_unexecuted_blocks=1 00:04:11.729 00:04:11.729 ' 00:04:11.729 10:32:39 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:11.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.729 --rc genhtml_branch_coverage=1 00:04:11.729 --rc genhtml_function_coverage=1 00:04:11.729 --rc genhtml_legend=1 00:04:11.729 --rc geninfo_all_blocks=1 00:04:11.729 --rc geninfo_unexecuted_blocks=1 00:04:11.729 00:04:11.729 ' 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
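The long preamble above is the scripts/common.sh version gate: `lt 1.15 2` splits each version string into numeric fields and compares them field by field, so the installed lcov 1.15 is judged older than 2 and the lcov branch/function coverage options get enabled. A condensed sketch of that field-wise comparison (simplified to '.'-separated versions; the real cmp_versions also splits on '-' and ':'):

    ver_lt() {                  # ver_lt A B -> true when version A sorts before B
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                # equal versions are not less-than
    }

    ver_lt 1.15 2 && echo 'enable lcov coverage flags'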
00:04:11.729 10:32:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:11.729 10:32:39 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:11.729 10:32:39 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:11.729 10:32:39 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:11.729 10:32:39 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:11.729 10:32:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.729 10:32:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.729 10:32:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.729 10:32:39 json_config -- paths/export.sh@5 -- # export PATH 00:04:11.729 10:32:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@51 -- # : 0 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:11.729 
10:32:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:11.729 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:11.729 10:32:39 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:11.729 INFO: JSON configuration test init 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:11.729 10:32:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:11.729 10:32:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:11.729 10:32:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:11.729 10:32:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.729 10:32:39 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:11.729 10:32:39 json_config -- json_config/common.sh@9 -- # 
local app=target 00:04:11.729 10:32:39 json_config -- json_config/common.sh@10 -- # shift 00:04:11.729 10:32:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:11.729 10:32:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:11.729 10:32:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:11.729 10:32:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.729 10:32:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.729 10:32:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3598645 00:04:11.729 10:32:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:11.729 Waiting for target to run... 00:04:11.729 10:32:39 json_config -- json_config/common.sh@25 -- # waitforlisten 3598645 /var/tmp/spdk_tgt.sock 00:04:11.729 10:32:39 json_config -- common/autotest_common.sh@833 -- # '[' -z 3598645 ']' 00:04:11.730 10:32:39 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:11.730 10:32:39 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:11.730 10:32:39 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:11.730 10:32:39 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:11.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:11.730 10:32:39 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:11.730 10:32:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.730 [2024-11-07 10:32:39.260001] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:11.730 [2024-11-07 10:32:39.260051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3598645 ] 00:04:12.298 [2024-11-07 10:32:39.703969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.298 [2024-11-07 10:32:39.762389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.557 10:32:40 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:12.557 10:32:40 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:12.557 10:32:40 json_config -- json_config/common.sh@26 -- # echo '' 00:04:12.557 00:04:12.557 10:32:40 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:12.557 10:32:40 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:12.557 10:32:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:12.557 10:32:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.557 10:32:40 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:12.557 10:32:40 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:12.557 10:32:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:12.557 10:32:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.557 10:32:40 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:12.557 10:32:40 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:12.557 10:32:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:15.849 10:32:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:15.849 10:32:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:15.849 10:32:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@54 -- 
# echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@54 -- # sort 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:15.849 10:32:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:15.849 10:32:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:15.849 10:32:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:15.849 10:32:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:04:15.849 10:32:43 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:04:15.849 10:32:43 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:04:15.849 10:32:43 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:15.849 10:32:43 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:15.849 10:32:43 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:15.849 10:32:43 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:15.849 10:32:43 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:15.849 10:32:43 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:04:15.849 10:32:43 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:15.849 10:32:43 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:04:15.849 10:32:43 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:15.849 10:32:43 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:04:15.849 10:32:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:23.973 
10:32:50 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@320 -- # e810=() 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@321 -- # x722=() 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@322 -- # mlx=() 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:04:23.973 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:04:23.973 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:04:23.973 10:32:50 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:04:23.973 Found net devices under 0000:d9:00.0: mlx_0_0 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:04:23.973 Found net devices under 0000:d9:00.1: mlx_0_1 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@62 -- # uname 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:04:23.973 10:32:50 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:04:23.974 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:23.974 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:04:23.974 altname enp217s0f0np0 00:04:23.974 altname ens818f0np0 00:04:23.974 inet 192.168.100.8/24 scope global mlx_0_0 00:04:23.974 valid_lft forever preferred_lft forever 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:04:23.974 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:23.974 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:04:23.974 altname enp217s0f1np1 00:04:23.974 altname ens818f1np1 
00:04:23.974 inet 192.168.100.9/24 scope global mlx_0_1 00:04:23.974 valid_lft forever preferred_lft forever 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@450 -- # return 0 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:04:23.974 192.168.100.9' 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:04:23.974 192.168.100.9' 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@485 -- # head -n 1 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:04:23.974 10:32:50 json_config -- 
nvmf/common.sh@486 -- # head -n 1 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:04:23.974 192.168.100.9' 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:04:23.974 10:32:50 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:04:23.974 10:32:50 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:04:23.974 10:32:50 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:23.974 10:32:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:23.974 MallocForNvmf0 00:04:23.974 10:32:50 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:23.974 10:32:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:23.974 MallocForNvmf1 00:04:23.974 10:32:50 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:04:23.974 10:32:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:04:23.974 [2024-11-07 10:32:50.868895] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:04:23.974 [2024-11-07 10:32:50.898926] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x103cb00/0x104e100) succeed. 00:04:23.974 [2024-11-07 10:32:50.911143] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x103fd50/0x10ce140) succeed. 
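Up to this point the trace is nvmf/common.sh discovering the mlx5 ports and their 192.168.100.x addresses; from here json_config.sh drives the target itself over its RPC socket. A condensed sketch of the calls replayed above, assuming the workspace's rpc.py and a spdk_tgt already listening on /var/tmp/spdk_tgt.sock:

# Shorthand for the tgt_rpc wrapper seen in the trace.
RPC='/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB bdev, 512-byte blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB bdev, 1024-byte blocks
# RDMA transport: -u io-unit-size 8192, -c in-capsule-data-size 0, which rdma.c
# raises to the 256-byte minimum per the warning logged above.
$RPC nvmf_create_transport -t rdma -u 8192 -c 0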
00:04:23.974 10:32:50 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:23.974 10:32:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:23.974 10:32:51 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:23.974 10:32:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:23.974 10:32:51 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:23.974 10:32:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:23.974 10:32:51 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:23.974 10:32:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:24.234 [2024-11-07 10:32:51.671631] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:24.234 10:32:51 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:24.234 10:32:51 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:24.234 10:32:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.234 10:32:51 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:24.234 10:32:51 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:24.234 10:32:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.234 10:32:51 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:24.234 10:32:51 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:24.234 10:32:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:24.493 MallocBdevForConfigChangeCheck 00:04:24.493 10:32:51 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:24.493 10:32:51 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:24.493 10:32:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.493 10:32:52 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:24.493 10:32:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:24.753 10:32:52 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:24.753 INFO: shutting down applications... 
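The subsystem build-out above completes the target side: one NQN, both malloc bdevs attached as namespaces, and an RDMA listener on the first discovered IP. Sketched end-to-end, reusing the $RPC shorthand from the previous sketch (-a allows any host NQN to connect, -s sets the serial number):

$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$RPC save_config   # snapshot later compared against spdk_tgt_config.json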
00:04:24.753 10:32:52 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:24.753 10:32:52 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:24.753 10:32:52 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:24.753 10:32:52 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:27.288 Calling clear_iscsi_subsystem 00:04:27.288 Calling clear_nvmf_subsystem 00:04:27.288 Calling clear_nbd_subsystem 00:04:27.288 Calling clear_ublk_subsystem 00:04:27.288 Calling clear_vhost_blk_subsystem 00:04:27.288 Calling clear_vhost_scsi_subsystem 00:04:27.288 Calling clear_bdev_subsystem 00:04:27.288 10:32:54 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:04:27.288 10:32:54 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:27.288 10:32:54 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:27.288 10:32:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:27.288 10:32:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:27.288 10:32:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:27.548 10:32:55 json_config -- json_config/json_config.sh@352 -- # break 00:04:27.548 10:32:55 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:27.548 10:32:55 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:27.548 10:32:55 json_config -- json_config/common.sh@31 -- # local app=target 00:04:27.549 10:32:55 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:27.549 10:32:55 json_config -- json_config/common.sh@35 -- # [[ -n 3598645 ]] 00:04:27.549 10:32:55 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3598645 00:04:27.549 10:32:55 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:27.549 10:32:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:27.549 10:32:55 json_config -- json_config/common.sh@41 -- # kill -0 3598645 00:04:27.549 10:32:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:28.118 10:32:55 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:28.118 10:32:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:28.118 10:32:55 json_config -- json_config/common.sh@41 -- # kill -0 3598645 00:04:28.118 10:32:55 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:28.118 10:32:55 json_config -- json_config/common.sh@43 -- # break 00:04:28.118 10:32:55 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:28.118 10:32:55 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:28.118 SPDK target shutdown done 00:04:28.118 10:32:55 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:28.118 INFO: relaunching applications... 
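The shutdown just logged is json_config/common.sh's poll loop: send SIGINT once, then probe liveness with kill -0 every half second, giving up after 30 attempts. The same pattern in isolation, using the pid from this run:

app_pid=3598645            # spdk_tgt pid recorded when the app was launched
kill -SIGINT "$app_pid"
for ((i = 0; i < 30; i++)); do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'   # matches the message in the trace
        break
    fi
    sleep 0.5
done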
00:04:28.118 10:32:55 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:28.118 10:32:55 json_config -- json_config/common.sh@9 -- # local app=target 00:04:28.118 10:32:55 json_config -- json_config/common.sh@10 -- # shift 00:04:28.118 10:32:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:28.118 10:32:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:28.118 10:32:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:28.118 10:32:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.118 10:32:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.118 10:32:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3603740 00:04:28.118 10:32:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:28.118 Waiting for target to run... 00:04:28.118 10:32:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:28.118 10:32:55 json_config -- json_config/common.sh@25 -- # waitforlisten 3603740 /var/tmp/spdk_tgt.sock 00:04:28.118 10:32:55 json_config -- common/autotest_common.sh@833 -- # '[' -z 3603740 ']' 00:04:28.118 10:32:55 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:28.118 10:32:55 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:28.118 10:32:55 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:28.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:28.118 10:32:55 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:28.118 10:32:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.118 [2024-11-07 10:32:55.775395] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:28.118 [2024-11-07 10:32:55.775455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603740 ] 00:04:28.687 [2024-11-07 10:32:56.220793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.687 [2024-11-07 10:32:56.278029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.976 [2024-11-07 10:32:59.347790] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x254c270/0x256af40) succeed. 00:04:31.976 [2024-11-07 10:32:59.358412] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x254f4c0/0x25d5f40) succeed. 
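The relaunch then blocks in waitforlisten until the new target answers RPCs on its UNIX socket. The helper's body is not expanded in this trace, so the following is only a rough reconstruction of the idea (the real implementation lives in test/common/autotest_common.sh; max_retries=100 matches the value traced above):

RPC_PY=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died while starting up
        # Assumed probe: any cheap RPC succeeding proves the socket is live.
        "$RPC_PY" -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}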
00:04:31.976 [2024-11-07 10:32:59.411937] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:32.543 10:32:59 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:32.543 10:32:59 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:32.543 10:32:59 json_config -- json_config/common.sh@26 -- # echo '' 00:04:32.543 00:04:32.543 10:32:59 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:32.543 10:32:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:32.543 INFO: Checking if target configuration is the same... 00:04:32.543 10:32:59 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:32.543 10:32:59 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:32.543 10:32:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.543 + '[' 2 -ne 2 ']' 00:04:32.543 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:32.543 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:32.543 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:32.543 +++ basename /dev/fd/62 00:04:32.544 ++ mktemp /tmp/62.XXX 00:04:32.544 + tmp_file_1=/tmp/62.Fp3 00:04:32.544 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:32.544 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:32.544 + tmp_file_2=/tmp/spdk_tgt_config.json.njz 00:04:32.544 + ret=0 00:04:32.544 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:32.802 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:32.802 + diff -u /tmp/62.Fp3 /tmp/spdk_tgt_config.json.njz 00:04:32.802 + echo 'INFO: JSON config files are the same' 00:04:32.802 INFO: JSON config files are the same 00:04:32.802 + rm /tmp/62.Fp3 /tmp/spdk_tgt_config.json.njz 00:04:32.802 + exit 0 00:04:32.802 10:33:00 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:32.802 10:33:00 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:32.802 INFO: changing configuration and checking if this can be detected... 
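As the '[' 2 -ne 2 ']' guard and mktemp calls above show, json_diff.sh never diffs raw files: both inputs (here /dev/fd/62 carrying the live save_config output, and the saved spdk_tgt_config.json) are normalized through config_filter.py -method sort before diff -u, so JSON key ordering cannot produce false mismatches. A condensed sketch of the same comparison:

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
tmp1=$(mktemp /tmp/62.XXX)
tmp2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config |
    $SPDK/test/json_config/config_filter.py -method sort > "$tmp1"
$SPDK/test/json_config/config_filter.py -method sort < "$SPDK/spdk_tgt_config.json" > "$tmp2"
diff -u "$tmp1" "$tmp2" && echo 'INFO: JSON config files are the same'
rm -f "$tmp1" "$tmp2"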
00:04:32.802 10:33:00 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:32.802 10:33:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:33.061 10:33:00 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.061 10:33:00 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:33.061 10:33:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:33.061 + '[' 2 -ne 2 ']' 00:04:33.061 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:33.062 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:33.062 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:33.062 +++ basename /dev/fd/62 00:04:33.062 ++ mktemp /tmp/62.XXX 00:04:33.062 + tmp_file_1=/tmp/62.RRs 00:04:33.062 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.062 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:33.062 + tmp_file_2=/tmp/spdk_tgt_config.json.VdH 00:04:33.062 + ret=0 00:04:33.062 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:33.321 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:33.321 + diff -u /tmp/62.RRs /tmp/spdk_tgt_config.json.VdH 00:04:33.321 + ret=1 00:04:33.321 + echo '=== Start of file: /tmp/62.RRs ===' 00:04:33.321 + cat /tmp/62.RRs 00:04:33.321 + echo '=== End of file: /tmp/62.RRs ===' 00:04:33.321 + echo '' 00:04:33.321 + echo '=== Start of file: /tmp/spdk_tgt_config.json.VdH ===' 00:04:33.321 + cat /tmp/spdk_tgt_config.json.VdH 00:04:33.321 + echo '=== End of file: /tmp/spdk_tgt_config.json.VdH ===' 00:04:33.321 + echo '' 00:04:33.321 + rm /tmp/62.RRs /tmp/spdk_tgt_config.json.VdH 00:04:33.321 + exit 1 00:04:33.321 10:33:00 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:33.321 INFO: configuration change detected. 
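The negative check above inverts the expectation: deleting MallocBdevForConfigChangeCheck perturbs the live configuration, and diff exiting non-zero ('+ ret=1' in the trace) is now the pass condition. A sketch of that step, with $RPC and $SPDK as defined in the earlier sketches:

$RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
if ! diff -u <($RPC save_config | $SPDK/test/json_config/config_filter.py -method sort) \
             <($SPDK/test/json_config/config_filter.py -method sort < "$SPDK/spdk_tgt_config.json"); then
    echo 'INFO: configuration change detected.'
fi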
00:04:33.321 10:33:00 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:33.321 10:33:00 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:33.321 10:33:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:33.321 10:33:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.321 10:33:00 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:33.321 10:33:00 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:33.321 10:33:00 json_config -- json_config/json_config.sh@324 -- # [[ -n 3603740 ]] 00:04:33.321 10:33:00 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:33.321 10:33:00 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:33.321 10:33:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:33.321 10:33:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.321 10:33:00 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:33.321 10:33:00 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:33.321 10:33:00 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:33.321 10:33:00 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:33.321 10:33:00 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:33.321 10:33:00 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:33.321 10:33:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:33.321 10:33:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.580 10:33:01 json_config -- json_config/json_config.sh@330 -- # killprocess 3603740 00:04:33.580 10:33:01 json_config -- common/autotest_common.sh@952 -- # '[' -z 3603740 ']' 00:04:33.580 10:33:01 json_config -- common/autotest_common.sh@956 -- # kill -0 3603740 00:04:33.580 10:33:01 json_config -- common/autotest_common.sh@957 -- # uname 00:04:33.580 10:33:01 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:33.580 10:33:01 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3603740 00:04:33.580 10:33:01 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:33.580 10:33:01 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:33.580 10:33:01 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3603740' 00:04:33.580 killing process with pid 3603740 00:04:33.580 10:33:01 json_config -- common/autotest_common.sh@971 -- # kill 3603740 00:04:33.580 10:33:01 json_config -- common/autotest_common.sh@976 -- # wait 3603740 00:04:36.117 10:33:03 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.117 10:33:03 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:36.117 10:33:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:36.117 10:33:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.117 10:33:03 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:36.117 10:33:03 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:36.117 INFO: Success 00:04:36.117 10:33:03 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:04:36.117 10:33:03 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:36.117 10:33:03 json_config -- nvmf/common.sh@121 -- # sync 00:04:36.117 10:33:03 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:04:36.117 10:33:03 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:04:36.117 10:33:03 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:04:36.118 10:33:03 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:36.118 10:33:03 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:04:36.118 00:04:36.118 real 0m24.663s 00:04:36.118 user 0m26.576s 00:04:36.118 sys 0m8.213s 00:04:36.118 10:33:03 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:36.118 10:33:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.118 ************************************ 00:04:36.118 END TEST json_config 00:04:36.118 ************************************ 00:04:36.118 10:33:03 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:36.118 10:33:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:36.118 10:33:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:36.118 10:33:03 -- common/autotest_common.sh@10 -- # set +x 00:04:36.118 ************************************ 00:04:36.118 START TEST json_config_extra_key 00:04:36.118 ************************************ 00:04:36.118 10:33:03 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:36.377 10:33:03 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:36.377 10:33:03 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:36.377 10:33:03 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:36.377 10:33:03 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:36.377 10:33:03 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.377 10:33:03 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.377 10:33:03 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.377 10:33:03 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.377 10:33:03 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.377 10:33:03 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.377 10:33:03 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.377 10:33:03 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.377 10:33:03 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.377 10:33:03 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.377 10:33:03 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.377 10:33:03 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:36.377 10:33:03 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:36.377 10:33:03 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.377 10:33:03 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.377 10:33:03 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:36.377 10:33:03 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:36.378 10:33:03 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.378 10:33:03 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:36.378 10:33:03 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.378 10:33:03 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:36.378 10:33:03 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:36.378 10:33:03 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.378 10:33:03 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:36.378 10:33:03 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.378 10:33:03 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.378 10:33:03 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.378 10:33:03 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:36.378 10:33:03 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.378 10:33:03 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:36.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.378 --rc genhtml_branch_coverage=1 00:04:36.378 --rc genhtml_function_coverage=1 00:04:36.378 --rc genhtml_legend=1 00:04:36.378 --rc geninfo_all_blocks=1 00:04:36.378 --rc geninfo_unexecuted_blocks=1 00:04:36.378 00:04:36.378 ' 00:04:36.378 10:33:03 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:36.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.378 --rc genhtml_branch_coverage=1 00:04:36.378 --rc genhtml_function_coverage=1 00:04:36.378 --rc genhtml_legend=1 00:04:36.378 --rc geninfo_all_blocks=1 00:04:36.378 --rc geninfo_unexecuted_blocks=1 00:04:36.378 00:04:36.378 ' 00:04:36.378 10:33:03 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:36.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.378 --rc genhtml_branch_coverage=1 00:04:36.378 --rc genhtml_function_coverage=1 00:04:36.378 --rc genhtml_legend=1 00:04:36.378 --rc geninfo_all_blocks=1 00:04:36.378 --rc geninfo_unexecuted_blocks=1 00:04:36.378 00:04:36.378 ' 00:04:36.378 10:33:03 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:36.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.378 --rc genhtml_branch_coverage=1 00:04:36.378 --rc genhtml_function_coverage=1 00:04:36.378 --rc genhtml_legend=1 00:04:36.378 --rc geninfo_all_blocks=1 00:04:36.378 --rc geninfo_unexecuted_blocks=1 00:04:36.378 00:04:36.378 ' 00:04:36.378 10:33:03 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.378 
10:33:03 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:36.378 10:33:03 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:36.378 10:33:03 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.378 10:33:03 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.378 10:33:03 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.378 10:33:03 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.378 10:33:03 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.378 10:33:03 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.378 10:33:03 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:36.378 10:33:03 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:36.378 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:36.378 10:33:03 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:36.378 10:33:03 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:36.378 10:33:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:36.378 10:33:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:36.378 10:33:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:36.378 10:33:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:36.378 10:33:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:36.378 10:33:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:36.378 10:33:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:36.378 10:33:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:36.378 10:33:03 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:36.378 10:33:03 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:36.378 INFO: launching applications... 
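Before launching anything, json_config/common.sh (sourced above) registers every app it may manage in parallel associative arrays keyed by app name; this suite only ever uses the 'target' key. The declarations from the trace, plus an assumed sketch of how the start helper consumes them, consistent with the spdk_tgt command line logged just below:

declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json')
# Assumed shape of json_config_test_start_app "$app" (not expanded in this trace):
#   spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
#   app_pid[$app]=$!
#   waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"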
00:04:36.378 10:33:03 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:36.378 10:33:03 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:36.378 10:33:03 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:36.378 10:33:03 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:36.378 10:33:03 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:36.378 10:33:03 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:36.378 10:33:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.378 10:33:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.378 10:33:03 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3605345 00:04:36.378 10:33:03 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:36.378 Waiting for target to run... 00:04:36.378 10:33:03 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3605345 /var/tmp/spdk_tgt.sock 00:04:36.378 10:33:03 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 3605345 ']' 00:04:36.378 10:33:03 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.378 10:33:03 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:36.378 10:33:03 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:36.378 10:33:03 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:36.378 10:33:03 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:36.378 10:33:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:36.378 [2024-11-07 10:33:03.993901] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:36.378 [2024-11-07 10:33:03.993951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3605345 ] 00:04:36.947 [2024-11-07 10:33:04.446506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.947 [2024-11-07 10:33:04.493103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.206 10:33:04 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:37.206 10:33:04 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:37.206 10:33:04 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:37.206 00:04:37.206 10:33:04 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:37.206 INFO: shutting down applications... 
00:04:37.206 10:33:04 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:37.206 10:33:04 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:37.206 10:33:04 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:37.206 10:33:04 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3605345 ]] 00:04:37.206 10:33:04 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3605345 00:04:37.206 10:33:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:37.206 10:33:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.206 10:33:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3605345 00:04:37.206 10:33:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:37.776 10:33:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:37.776 10:33:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.776 10:33:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3605345 00:04:37.776 10:33:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:37.776 10:33:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:37.776 10:33:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:37.776 10:33:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:37.776 SPDK target shutdown done 00:04:37.776 10:33:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:37.776 Success 00:04:37.776 00:04:37.776 real 0m1.591s 00:04:37.776 user 0m1.161s 00:04:37.776 sys 0m0.609s 00:04:37.776 10:33:05 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:37.776 10:33:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:37.776 ************************************ 00:04:37.776 END TEST json_config_extra_key 00:04:37.776 ************************************ 00:04:37.776 10:33:05 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:37.776 10:33:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:37.776 10:33:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:37.776 10:33:05 -- common/autotest_common.sh@10 -- # set +x 00:04:37.776 ************************************ 00:04:37.776 START TEST alias_rpc 00:04:37.776 ************************************ 00:04:37.776 10:33:05 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:38.071 * Looking for test storage... 
00:04:38.071 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:04:38.071 10:33:05 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:38.071 10:33:05 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:38.071 10:33:05 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:38.071 10:33:05 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.071 10:33:05 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:38.071 10:33:05 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.071 10:33:05 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:38.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.071 --rc genhtml_branch_coverage=1 00:04:38.071 --rc genhtml_function_coverage=1 00:04:38.071 --rc genhtml_legend=1 00:04:38.071 --rc geninfo_all_blocks=1 00:04:38.071 --rc geninfo_unexecuted_blocks=1 00:04:38.071 00:04:38.071 ' 00:04:38.071 10:33:05 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:38.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.071 --rc genhtml_branch_coverage=1 00:04:38.071 --rc genhtml_function_coverage=1 00:04:38.071 --rc genhtml_legend=1 00:04:38.071 --rc geninfo_all_blocks=1 00:04:38.071 --rc geninfo_unexecuted_blocks=1 00:04:38.071 00:04:38.071 ' 00:04:38.071 10:33:05 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:38.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.071 --rc genhtml_branch_coverage=1 00:04:38.071 --rc genhtml_function_coverage=1 00:04:38.071 --rc genhtml_legend=1 00:04:38.071 --rc geninfo_all_blocks=1 00:04:38.071 --rc geninfo_unexecuted_blocks=1 00:04:38.071 00:04:38.071 ' 00:04:38.071 10:33:05 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:38.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.071 --rc genhtml_branch_coverage=1 00:04:38.071 --rc genhtml_function_coverage=1 00:04:38.071 --rc genhtml_legend=1 00:04:38.071 --rc geninfo_all_blocks=1 00:04:38.071 --rc geninfo_unexecuted_blocks=1 00:04:38.071 00:04:38.071 ' 00:04:38.071 10:33:05 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:38.071 10:33:05 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3606058 00:04:38.071 10:33:05 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3606058 00:04:38.071 10:33:05 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.071 10:33:05 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 3606058 ']' 00:04:38.071 10:33:05 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.071 10:33:05 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:38.071 10:33:05 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.071 10:33:05 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:38.071 10:33:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.071 [2024-11-07 10:33:05.656470] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:38.071 [2024-11-07 10:33:05.656534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3606058 ] 00:04:38.071 [2024-11-07 10:33:05.731246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.357 [2024-11-07 10:33:05.772600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.357 10:33:05 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:38.357 10:33:05 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:38.357 10:33:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:38.617 10:33:06 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3606058 00:04:38.617 10:33:06 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 3606058 ']' 00:04:38.617 10:33:06 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 3606058 00:04:38.617 10:33:06 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:38.617 10:33:06 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:38.617 10:33:06 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3606058 00:04:38.617 10:33:06 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:38.617 10:33:06 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:38.617 10:33:06 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3606058' 00:04:38.617 killing process with pid 3606058 00:04:38.617 10:33:06 alias_rpc -- common/autotest_common.sh@971 -- # kill 3606058 00:04:38.617 10:33:06 alias_rpc -- common/autotest_common.sh@976 -- # wait 3606058 00:04:39.186 00:04:39.186 real 0m1.150s 00:04:39.186 user 0m1.134s 00:04:39.186 sys 0m0.453s 00:04:39.186 10:33:06 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:39.186 10:33:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.186 ************************************ 00:04:39.186 END TEST alias_rpc 00:04:39.186 ************************************ 00:04:39.186 10:33:06 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:39.186 10:33:06 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:39.186 10:33:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:39.187 10:33:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:39.187 10:33:06 -- common/autotest_common.sh@10 -- # set +x 00:04:39.187 ************************************ 00:04:39.187 START TEST spdkcli_tcp 00:04:39.187 ************************************ 00:04:39.187 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:39.187 * Looking for test storage... 
00:04:39.187 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:04:39.187 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:39.187 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:39.187 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:39.187 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.187 10:33:06 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:39.187 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.187 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:39.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.187 --rc genhtml_branch_coverage=1 00:04:39.187 --rc genhtml_function_coverage=1 00:04:39.187 --rc genhtml_legend=1 00:04:39.187 --rc geninfo_all_blocks=1 00:04:39.187 --rc geninfo_unexecuted_blocks=1 00:04:39.187 00:04:39.187 ' 00:04:39.187 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:39.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.187 --rc genhtml_branch_coverage=1 00:04:39.187 --rc genhtml_function_coverage=1 00:04:39.187 --rc genhtml_legend=1 00:04:39.187 --rc geninfo_all_blocks=1 00:04:39.187 --rc geninfo_unexecuted_blocks=1 
00:04:39.187 00:04:39.187 ' 00:04:39.187 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:39.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.187 --rc genhtml_branch_coverage=1 00:04:39.187 --rc genhtml_function_coverage=1 00:04:39.187 --rc genhtml_legend=1 00:04:39.187 --rc geninfo_all_blocks=1 00:04:39.187 --rc geninfo_unexecuted_blocks=1 00:04:39.187 00:04:39.187 ' 00:04:39.187 10:33:06 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:39.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.187 --rc genhtml_branch_coverage=1 00:04:39.187 --rc genhtml_function_coverage=1 00:04:39.187 --rc genhtml_legend=1 00:04:39.187 --rc geninfo_all_blocks=1 00:04:39.187 --rc geninfo_unexecuted_blocks=1 00:04:39.187 00:04:39.187 ' 00:04:39.187 10:33:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:04:39.187 10:33:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:39.187 10:33:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:04:39.187 10:33:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:39.187 10:33:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:39.187 10:33:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:39.187 10:33:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:39.187 10:33:06 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:39.187 10:33:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.187 10:33:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3606412 00:04:39.187 10:33:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3606412 00:04:39.187 10:33:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:39.187 10:33:06 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 3606412 ']' 00:04:39.187 10:33:06 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.187 10:33:06 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:39.187 10:33:06 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.187 10:33:06 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:39.187 10:33:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.447 [2024-11-07 10:33:06.874501] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:39.447 [2024-11-07 10:33:06.874559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3606412 ] 00:04:39.447 [2024-11-07 10:33:06.946925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:39.447 [2024-11-07 10:33:06.986347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.447 [2024-11-07 10:33:06.986350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.707 10:33:07 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:39.707 10:33:07 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:39.707 10:33:07 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3606417 00:04:39.707 10:33:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:39.707 10:33:07 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:39.707 [ 00:04:39.707 "bdev_malloc_delete", 00:04:39.707 "bdev_malloc_create", 00:04:39.707 "bdev_null_resize", 00:04:39.707 "bdev_null_delete", 00:04:39.707 "bdev_null_create", 00:04:39.707 "bdev_nvme_cuse_unregister", 00:04:39.707 "bdev_nvme_cuse_register", 00:04:39.707 "bdev_opal_new_user", 00:04:39.707 "bdev_opal_set_lock_state", 00:04:39.707 "bdev_opal_delete", 00:04:39.707 "bdev_opal_get_info", 00:04:39.707 "bdev_opal_create", 00:04:39.707 "bdev_nvme_opal_revert", 00:04:39.707 "bdev_nvme_opal_init", 00:04:39.707 "bdev_nvme_send_cmd", 00:04:39.707 "bdev_nvme_set_keys", 00:04:39.707 "bdev_nvme_get_path_iostat", 00:04:39.707 "bdev_nvme_get_mdns_discovery_info", 00:04:39.707 "bdev_nvme_stop_mdns_discovery", 00:04:39.707 "bdev_nvme_start_mdns_discovery", 00:04:39.707 "bdev_nvme_set_multipath_policy", 00:04:39.707 "bdev_nvme_set_preferred_path", 00:04:39.707 "bdev_nvme_get_io_paths", 00:04:39.707 "bdev_nvme_remove_error_injection", 00:04:39.707 "bdev_nvme_add_error_injection", 00:04:39.707 "bdev_nvme_get_discovery_info", 00:04:39.707 "bdev_nvme_stop_discovery", 00:04:39.707 "bdev_nvme_start_discovery", 00:04:39.707 "bdev_nvme_get_controller_health_info", 00:04:39.707 "bdev_nvme_disable_controller", 00:04:39.707 "bdev_nvme_enable_controller", 00:04:39.707 "bdev_nvme_reset_controller", 00:04:39.707 "bdev_nvme_get_transport_statistics", 00:04:39.707 "bdev_nvme_apply_firmware", 00:04:39.707 "bdev_nvme_detach_controller", 00:04:39.707 "bdev_nvme_get_controllers", 00:04:39.707 "bdev_nvme_attach_controller", 00:04:39.707 "bdev_nvme_set_hotplug", 00:04:39.707 "bdev_nvme_set_options", 00:04:39.707 "bdev_passthru_delete", 00:04:39.707 "bdev_passthru_create", 00:04:39.707 "bdev_lvol_set_parent_bdev", 00:04:39.707 "bdev_lvol_set_parent", 00:04:39.707 "bdev_lvol_check_shallow_copy", 00:04:39.707 "bdev_lvol_start_shallow_copy", 00:04:39.707 "bdev_lvol_grow_lvstore", 00:04:39.707 "bdev_lvol_get_lvols", 00:04:39.707 "bdev_lvol_get_lvstores", 00:04:39.707 "bdev_lvol_delete", 00:04:39.707 "bdev_lvol_set_read_only", 00:04:39.707 "bdev_lvol_resize", 00:04:39.707 "bdev_lvol_decouple_parent", 00:04:39.707 "bdev_lvol_inflate", 00:04:39.707 "bdev_lvol_rename", 00:04:39.707 "bdev_lvol_clone_bdev", 00:04:39.707 "bdev_lvol_clone", 00:04:39.707 "bdev_lvol_snapshot", 00:04:39.707 "bdev_lvol_create", 00:04:39.707 "bdev_lvol_delete_lvstore", 00:04:39.707 "bdev_lvol_rename_lvstore", 
00:04:39.707 "bdev_lvol_create_lvstore", 00:04:39.707 "bdev_raid_set_options", 00:04:39.707 "bdev_raid_remove_base_bdev", 00:04:39.707 "bdev_raid_add_base_bdev", 00:04:39.707 "bdev_raid_delete", 00:04:39.707 "bdev_raid_create", 00:04:39.707 "bdev_raid_get_bdevs", 00:04:39.707 "bdev_error_inject_error", 00:04:39.707 "bdev_error_delete", 00:04:39.707 "bdev_error_create", 00:04:39.707 "bdev_split_delete", 00:04:39.707 "bdev_split_create", 00:04:39.707 "bdev_delay_delete", 00:04:39.707 "bdev_delay_create", 00:04:39.707 "bdev_delay_update_latency", 00:04:39.707 "bdev_zone_block_delete", 00:04:39.707 "bdev_zone_block_create", 00:04:39.707 "blobfs_create", 00:04:39.707 "blobfs_detect", 00:04:39.707 "blobfs_set_cache_size", 00:04:39.707 "bdev_aio_delete", 00:04:39.707 "bdev_aio_rescan", 00:04:39.707 "bdev_aio_create", 00:04:39.707 "bdev_ftl_set_property", 00:04:39.707 "bdev_ftl_get_properties", 00:04:39.707 "bdev_ftl_get_stats", 00:04:39.707 "bdev_ftl_unmap", 00:04:39.707 "bdev_ftl_unload", 00:04:39.707 "bdev_ftl_delete", 00:04:39.707 "bdev_ftl_load", 00:04:39.707 "bdev_ftl_create", 00:04:39.707 "bdev_virtio_attach_controller", 00:04:39.707 "bdev_virtio_scsi_get_devices", 00:04:39.707 "bdev_virtio_detach_controller", 00:04:39.707 "bdev_virtio_blk_set_hotplug", 00:04:39.707 "bdev_iscsi_delete", 00:04:39.707 "bdev_iscsi_create", 00:04:39.707 "bdev_iscsi_set_options", 00:04:39.707 "accel_error_inject_error", 00:04:39.707 "ioat_scan_accel_module", 00:04:39.707 "dsa_scan_accel_module", 00:04:39.707 "iaa_scan_accel_module", 00:04:39.707 "keyring_file_remove_key", 00:04:39.707 "keyring_file_add_key", 00:04:39.707 "keyring_linux_set_options", 00:04:39.707 "fsdev_aio_delete", 00:04:39.707 "fsdev_aio_create", 00:04:39.707 "iscsi_get_histogram", 00:04:39.707 "iscsi_enable_histogram", 00:04:39.707 "iscsi_set_options", 00:04:39.707 "iscsi_get_auth_groups", 00:04:39.707 "iscsi_auth_group_remove_secret", 00:04:39.707 "iscsi_auth_group_add_secret", 00:04:39.707 "iscsi_delete_auth_group", 00:04:39.707 "iscsi_create_auth_group", 00:04:39.707 "iscsi_set_discovery_auth", 00:04:39.707 "iscsi_get_options", 00:04:39.707 "iscsi_target_node_request_logout", 00:04:39.708 "iscsi_target_node_set_redirect", 00:04:39.708 "iscsi_target_node_set_auth", 00:04:39.708 "iscsi_target_node_add_lun", 00:04:39.708 "iscsi_get_stats", 00:04:39.708 "iscsi_get_connections", 00:04:39.708 "iscsi_portal_group_set_auth", 00:04:39.708 "iscsi_start_portal_group", 00:04:39.708 "iscsi_delete_portal_group", 00:04:39.708 "iscsi_create_portal_group", 00:04:39.708 "iscsi_get_portal_groups", 00:04:39.708 "iscsi_delete_target_node", 00:04:39.708 "iscsi_target_node_remove_pg_ig_maps", 00:04:39.708 "iscsi_target_node_add_pg_ig_maps", 00:04:39.708 "iscsi_create_target_node", 00:04:39.708 "iscsi_get_target_nodes", 00:04:39.708 "iscsi_delete_initiator_group", 00:04:39.708 "iscsi_initiator_group_remove_initiators", 00:04:39.708 "iscsi_initiator_group_add_initiators", 00:04:39.708 "iscsi_create_initiator_group", 00:04:39.708 "iscsi_get_initiator_groups", 00:04:39.708 "nvmf_set_crdt", 00:04:39.708 "nvmf_set_config", 00:04:39.708 "nvmf_set_max_subsystems", 00:04:39.708 "nvmf_stop_mdns_prr", 00:04:39.708 "nvmf_publish_mdns_prr", 00:04:39.708 "nvmf_subsystem_get_listeners", 00:04:39.708 "nvmf_subsystem_get_qpairs", 00:04:39.708 "nvmf_subsystem_get_controllers", 00:04:39.708 "nvmf_get_stats", 00:04:39.708 "nvmf_get_transports", 00:04:39.708 "nvmf_create_transport", 00:04:39.708 "nvmf_get_targets", 00:04:39.708 "nvmf_delete_target", 00:04:39.708 "nvmf_create_target", 
00:04:39.708 "nvmf_subsystem_allow_any_host", 00:04:39.708 "nvmf_subsystem_set_keys", 00:04:39.708 "nvmf_subsystem_remove_host", 00:04:39.708 "nvmf_subsystem_add_host", 00:04:39.708 "nvmf_ns_remove_host", 00:04:39.708 "nvmf_ns_add_host", 00:04:39.708 "nvmf_subsystem_remove_ns", 00:04:39.708 "nvmf_subsystem_set_ns_ana_group", 00:04:39.708 "nvmf_subsystem_add_ns", 00:04:39.708 "nvmf_subsystem_listener_set_ana_state", 00:04:39.708 "nvmf_discovery_get_referrals", 00:04:39.708 "nvmf_discovery_remove_referral", 00:04:39.708 "nvmf_discovery_add_referral", 00:04:39.708 "nvmf_subsystem_remove_listener", 00:04:39.708 "nvmf_subsystem_add_listener", 00:04:39.708 "nvmf_delete_subsystem", 00:04:39.708 "nvmf_create_subsystem", 00:04:39.708 "nvmf_get_subsystems", 00:04:39.708 "env_dpdk_get_mem_stats", 00:04:39.708 "nbd_get_disks", 00:04:39.708 "nbd_stop_disk", 00:04:39.708 "nbd_start_disk", 00:04:39.708 "ublk_recover_disk", 00:04:39.708 "ublk_get_disks", 00:04:39.708 "ublk_stop_disk", 00:04:39.708 "ublk_start_disk", 00:04:39.708 "ublk_destroy_target", 00:04:39.708 "ublk_create_target", 00:04:39.708 "virtio_blk_create_transport", 00:04:39.708 "virtio_blk_get_transports", 00:04:39.708 "vhost_controller_set_coalescing", 00:04:39.708 "vhost_get_controllers", 00:04:39.708 "vhost_delete_controller", 00:04:39.708 "vhost_create_blk_controller", 00:04:39.708 "vhost_scsi_controller_remove_target", 00:04:39.708 "vhost_scsi_controller_add_target", 00:04:39.708 "vhost_start_scsi_controller", 00:04:39.708 "vhost_create_scsi_controller", 00:04:39.708 "thread_set_cpumask", 00:04:39.708 "scheduler_set_options", 00:04:39.708 "framework_get_governor", 00:04:39.708 "framework_get_scheduler", 00:04:39.708 "framework_set_scheduler", 00:04:39.708 "framework_get_reactors", 00:04:39.708 "thread_get_io_channels", 00:04:39.708 "thread_get_pollers", 00:04:39.708 "thread_get_stats", 00:04:39.708 "framework_monitor_context_switch", 00:04:39.708 "spdk_kill_instance", 00:04:39.708 "log_enable_timestamps", 00:04:39.708 "log_get_flags", 00:04:39.708 "log_clear_flag", 00:04:39.708 "log_set_flag", 00:04:39.708 "log_get_level", 00:04:39.708 "log_set_level", 00:04:39.708 "log_get_print_level", 00:04:39.708 "log_set_print_level", 00:04:39.708 "framework_enable_cpumask_locks", 00:04:39.708 "framework_disable_cpumask_locks", 00:04:39.708 "framework_wait_init", 00:04:39.708 "framework_start_init", 00:04:39.708 "scsi_get_devices", 00:04:39.708 "bdev_get_histogram", 00:04:39.708 "bdev_enable_histogram", 00:04:39.708 "bdev_set_qos_limit", 00:04:39.708 "bdev_set_qd_sampling_period", 00:04:39.708 "bdev_get_bdevs", 00:04:39.708 "bdev_reset_iostat", 00:04:39.708 "bdev_get_iostat", 00:04:39.708 "bdev_examine", 00:04:39.708 "bdev_wait_for_examine", 00:04:39.708 "bdev_set_options", 00:04:39.708 "accel_get_stats", 00:04:39.708 "accel_set_options", 00:04:39.708 "accel_set_driver", 00:04:39.708 "accel_crypto_key_destroy", 00:04:39.708 "accel_crypto_keys_get", 00:04:39.708 "accel_crypto_key_create", 00:04:39.708 "accel_assign_opc", 00:04:39.708 "accel_get_module_info", 00:04:39.708 "accel_get_opc_assignments", 00:04:39.708 "vmd_rescan", 00:04:39.708 "vmd_remove_device", 00:04:39.708 "vmd_enable", 00:04:39.708 "sock_get_default_impl", 00:04:39.708 "sock_set_default_impl", 00:04:39.708 "sock_impl_set_options", 00:04:39.708 "sock_impl_get_options", 00:04:39.708 "iobuf_get_stats", 00:04:39.708 "iobuf_set_options", 00:04:39.708 "keyring_get_keys", 00:04:39.708 "framework_get_pci_devices", 00:04:39.708 "framework_get_config", 00:04:39.708 "framework_get_subsystems", 
00:04:39.708 "fsdev_set_opts", 00:04:39.708 "fsdev_get_opts", 00:04:39.708 "trace_get_info", 00:04:39.708 "trace_get_tpoint_group_mask", 00:04:39.708 "trace_disable_tpoint_group", 00:04:39.708 "trace_enable_tpoint_group", 00:04:39.708 "trace_clear_tpoint_mask", 00:04:39.708 "trace_set_tpoint_mask", 00:04:39.708 "notify_get_notifications", 00:04:39.708 "notify_get_types", 00:04:39.708 "spdk_get_version", 00:04:39.708 "rpc_get_methods" 00:04:39.708 ] 00:04:39.968 10:33:07 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:39.968 10:33:07 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:39.968 10:33:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.968 10:33:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:39.968 10:33:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3606412 00:04:39.968 10:33:07 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 3606412 ']' 00:04:39.968 10:33:07 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 3606412 00:04:39.968 10:33:07 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:39.968 10:33:07 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:39.968 10:33:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3606412 00:04:39.968 10:33:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:39.968 10:33:07 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:39.968 10:33:07 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3606412' 00:04:39.968 killing process with pid 3606412 00:04:39.968 10:33:07 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 3606412 00:04:39.968 10:33:07 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 3606412 00:04:40.227 00:04:40.227 real 0m1.160s 00:04:40.227 user 0m1.914s 00:04:40.227 sys 0m0.480s 00:04:40.227 10:33:07 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:40.227 10:33:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.227 ************************************ 00:04:40.227 END TEST spdkcli_tcp 00:04:40.227 ************************************ 00:04:40.227 10:33:07 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:40.227 10:33:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:40.227 10:33:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:40.227 10:33:07 -- common/autotest_common.sh@10 -- # set +x 00:04:40.227 ************************************ 00:04:40.227 START TEST dpdk_mem_utility 00:04:40.227 ************************************ 00:04:40.227 10:33:07 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:40.560 * Looking for test storage... 
00:04:40.560 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:04:40.560 10:33:07 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:40.560 10:33:07 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:40.560 10:33:07 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:40.560 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.560 10:33:08 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:40.560 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.560 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:40.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.560 --rc genhtml_branch_coverage=1 00:04:40.560 --rc genhtml_function_coverage=1 00:04:40.560 --rc genhtml_legend=1 00:04:40.560 --rc geninfo_all_blocks=1 00:04:40.560 --rc geninfo_unexecuted_blocks=1 00:04:40.560 00:04:40.560 ' 00:04:40.560 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:40.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.560 --rc 
genhtml_branch_coverage=1 00:04:40.560 --rc genhtml_function_coverage=1 00:04:40.560 --rc genhtml_legend=1 00:04:40.560 --rc geninfo_all_blocks=1 00:04:40.560 --rc geninfo_unexecuted_blocks=1 00:04:40.560 00:04:40.560 ' 00:04:40.560 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:40.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.560 --rc genhtml_branch_coverage=1 00:04:40.560 --rc genhtml_function_coverage=1 00:04:40.560 --rc genhtml_legend=1 00:04:40.560 --rc geninfo_all_blocks=1 00:04:40.560 --rc geninfo_unexecuted_blocks=1 00:04:40.560 00:04:40.560 ' 00:04:40.560 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:40.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.560 --rc genhtml_branch_coverage=1 00:04:40.560 --rc genhtml_function_coverage=1 00:04:40.560 --rc genhtml_legend=1 00:04:40.560 --rc geninfo_all_blocks=1 00:04:40.560 --rc geninfo_unexecuted_blocks=1 00:04:40.560 00:04:40.560 ' 00:04:40.560 10:33:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:40.560 10:33:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3606834 00:04:40.560 10:33:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3606834 00:04:40.560 10:33:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.560 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 3606834 ']' 00:04:40.560 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.560 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:40.560 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.560 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:40.560 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.561 [2024-11-07 10:33:08.110682] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:40.561 [2024-11-07 10:33:08.110732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3606834 ] 00:04:40.561 [2024-11-07 10:33:08.187985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.561 [2024-11-07 10:33:08.226975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.822 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:40.822 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:40.822 10:33:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:40.822 10:33:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:40.822 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.822 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.822 { 00:04:40.822 "filename": "/tmp/spdk_mem_dump.txt" 00:04:40.822 } 00:04:40.822 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.822 10:33:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:41.082 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:41.082 1 heaps totaling size 810.000000 MiB 00:04:41.082 size: 810.000000 MiB heap id: 0 00:04:41.082 end heaps---------- 00:04:41.082 9 mempools totaling size 595.772034 MiB 00:04:41.082 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:41.082 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:41.082 size: 92.545471 MiB name: bdev_io_3606834 00:04:41.082 size: 50.003479 MiB name: msgpool_3606834 00:04:41.082 size: 36.509338 MiB name: fsdev_io_3606834 00:04:41.082 size: 21.763794 MiB name: PDU_Pool 00:04:41.082 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:41.082 size: 4.133484 MiB name: evtpool_3606834 00:04:41.082 size: 0.026123 MiB name: Session_Pool 00:04:41.082 end mempools------- 00:04:41.082 6 memzones totaling size 4.142822 MiB 00:04:41.082 size: 1.000366 MiB name: RG_ring_0_3606834 00:04:41.082 size: 1.000366 MiB name: RG_ring_1_3606834 00:04:41.082 size: 1.000366 MiB name: RG_ring_4_3606834 00:04:41.082 size: 1.000366 MiB name: RG_ring_5_3606834 00:04:41.082 size: 0.125366 MiB name: RG_ring_2_3606834 00:04:41.082 size: 0.015991 MiB name: RG_ring_3_3606834 00:04:41.083 end memzones------- 00:04:41.083 10:33:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:41.083 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:41.083 list of free elements. 
size: 10.862488 MiB 00:04:41.083 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:41.083 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:41.083 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:41.083 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:41.083 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:41.083 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:41.083 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:41.083 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:41.083 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:41.083 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:41.083 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:41.083 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:41.083 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:41.083 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:41.083 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:41.083 list of standard malloc elements. size: 199.218628 MiB 00:04:41.083 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:41.083 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:41.083 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:41.083 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:41.083 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:41.083 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:41.083 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:41.083 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:41.083 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:41.083 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:41.083 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:41.083 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:41.083 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:41.083 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:41.083 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:41.083 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:41.083 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:41.083 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:41.083 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:41.083 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:41.083 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:41.083 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:41.083 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:41.083 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:41.083 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:41.083 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:41.083 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:41.083 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:41.083 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:41.083 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:41.083 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:41.083 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:41.083 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:41.083 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:41.083 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:41.083 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:41.083 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:41.083 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:41.083 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:41.083 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:41.083 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:41.083 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:41.083 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:41.083 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:41.083 list of memzone associated elements. size: 599.918884 MiB 00:04:41.083 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:41.083 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:41.083 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:41.083 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:41.083 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:41.083 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3606834_0 00:04:41.083 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:41.083 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3606834_0 00:04:41.083 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:41.083 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3606834_0 00:04:41.083 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:41.083 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:41.083 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:41.083 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:41.083 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:41.083 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3606834_0 00:04:41.083 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:41.083 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3606834 00:04:41.083 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:41.083 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3606834 00:04:41.083 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:41.083 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:41.083 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:41.083 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:41.083 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:41.083 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:41.083 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:41.083 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:41.083 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:41.083 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3606834 00:04:41.083 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:41.083 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3606834 00:04:41.083 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:41.083 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3606834 00:04:41.083 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:04:41.083 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3606834 00:04:41.083 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:41.083 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3606834 00:04:41.083 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:41.083 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3606834 00:04:41.083 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:41.083 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:41.083 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:41.083 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:41.083 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:41.083 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:41.083 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:41.083 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3606834 00:04:41.083 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:41.083 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3606834 00:04:41.083 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:41.083 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:41.083 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:41.083 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:41.083 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:41.083 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3606834 00:04:41.083 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:41.083 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:41.084 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:41.084 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3606834 00:04:41.084 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:41.084 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3606834 00:04:41.084 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:41.084 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3606834 00:04:41.084 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:41.084 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:41.084 10:33:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:41.084 10:33:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3606834 00:04:41.084 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 3606834 ']' 00:04:41.084 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 3606834 00:04:41.084 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:41.084 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:41.084 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3606834 00:04:41.084 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:41.084 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:41.084 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3606834' 00:04:41.084 killing process with pid 3606834 00:04:41.084 10:33:08 
dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 3606834 00:04:41.084 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 3606834 00:04:41.344 00:04:41.344 real 0m1.056s 00:04:41.344 user 0m7.923s 00:04:41.344 sys 0m5.081s 00:04:41.344 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.344 10:33:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:41.344 ************************************ 00:04:41.344 END TEST dpdk_mem_utility 00:04:41.344 ************************************ 00:04:41.344 10:33:08 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:04:41.344 10:33:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.344 10:33:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.344 10:33:08 -- common/autotest_common.sh@10 -- # set +x 00:04:41.344 ************************************ 00:04:41.344 START TEST event 00:04:41.344 ************************************ 00:04:41.344 10:33:08 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:04:41.603 * Looking for test storage... 00:04:41.603 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:04:41.603 10:33:09 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:41.603 10:33:09 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:41.603 10:33:09 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:41.603 10:33:09 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:41.603 10:33:09 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.603 10:33:09 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.603 10:33:09 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.603 10:33:09 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.603 10:33:09 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.603 10:33:09 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.603 10:33:09 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.603 10:33:09 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.603 10:33:09 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.603 10:33:09 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.603 10:33:09 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.603 10:33:09 event -- scripts/common.sh@344 -- # case "$op" in 00:04:41.603 10:33:09 event -- scripts/common.sh@345 -- # : 1 00:04:41.603 10:33:09 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.603 10:33:09 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.603 10:33:09 event -- scripts/common.sh@365 -- # decimal 1 00:04:41.603 10:33:09 event -- scripts/common.sh@353 -- # local d=1 00:04:41.603 10:33:09 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.603 10:33:09 event -- scripts/common.sh@355 -- # echo 1 00:04:41.603 10:33:09 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.603 10:33:09 event -- scripts/common.sh@366 -- # decimal 2 00:04:41.603 10:33:09 event -- scripts/common.sh@353 -- # local d=2 00:04:41.603 10:33:09 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.603 10:33:09 event -- scripts/common.sh@355 -- # echo 2 00:04:41.603 10:33:09 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.603 10:33:09 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.603 10:33:09 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.603 10:33:09 event -- scripts/common.sh@368 -- # return 0 00:04:41.603 10:33:09 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.604 10:33:09 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:41.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.604 --rc genhtml_branch_coverage=1 00:04:41.604 --rc genhtml_function_coverage=1 00:04:41.604 --rc genhtml_legend=1 00:04:41.604 --rc geninfo_all_blocks=1 00:04:41.604 --rc geninfo_unexecuted_blocks=1 00:04:41.604 00:04:41.604 ' 00:04:41.604 10:33:09 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:41.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.604 --rc genhtml_branch_coverage=1 00:04:41.604 --rc genhtml_function_coverage=1 00:04:41.604 --rc genhtml_legend=1 00:04:41.604 --rc geninfo_all_blocks=1 00:04:41.604 --rc geninfo_unexecuted_blocks=1 00:04:41.604 00:04:41.604 ' 00:04:41.604 10:33:09 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:41.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.604 --rc genhtml_branch_coverage=1 00:04:41.604 --rc genhtml_function_coverage=1 00:04:41.604 --rc genhtml_legend=1 00:04:41.604 --rc geninfo_all_blocks=1 00:04:41.604 --rc geninfo_unexecuted_blocks=1 00:04:41.604 00:04:41.604 ' 00:04:41.604 10:33:09 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:41.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.604 --rc genhtml_branch_coverage=1 00:04:41.604 --rc genhtml_function_coverage=1 00:04:41.604 --rc genhtml_legend=1 00:04:41.604 --rc geninfo_all_blocks=1 00:04:41.604 --rc geninfo_unexecuted_blocks=1 00:04:41.604 00:04:41.604 ' 00:04:41.604 10:33:09 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:41.604 10:33:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:41.604 10:33:09 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:41.604 10:33:09 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:41.604 10:33:09 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.604 10:33:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.604 ************************************ 00:04:41.604 START TEST event_perf 00:04:41.604 ************************************ 00:04:41.604 10:33:09 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
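In the event_perf invocation above, -m takes a hexadecimal core mask and -t the run time in seconds: 0xF is binary 1111, i.e. lcores 0 through 3, which matches the four per-lcore counters printed below. An illustrative snippet for expanding such a mask (not part of the test itself):

mask=0xF
for i in $(seq 0 31); do
    # Bit i set in the mask means lcore i hosts a reactor.
    (( (mask >> i) & 1 )) && echo "lcore $i selected"
done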
00:04:41.604 Running I/O for 1 seconds...[2024-11-07 10:33:09.186825] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:41.604 [2024-11-07 10:33:09.186903] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3607346 ] 00:04:41.604 [2024-11-07 10:33:09.263592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:41.863 [2024-11-07 10:33:09.306344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.863 [2024-11-07 10:33:09.306438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:41.863 [2024-11-07 10:33:09.306533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:41.863 [2024-11-07 10:33:09.306552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.803 Running I/O for 1 seconds... 00:04:42.803 lcore 0: 214493 00:04:42.803 lcore 1: 214493 00:04:42.803 lcore 2: 214492 00:04:42.803 lcore 3: 214492 00:04:42.803 done. 00:04:42.803 00:04:42.803 real 0m1.185s 00:04:42.803 user 0m4.099s 00:04:42.803 sys 0m0.084s 00:04:42.803 10:33:10 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:42.803 10:33:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:42.803 ************************************ 00:04:42.803 END TEST event_perf 00:04:42.803 ************************************ 00:04:42.803 10:33:10 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:42.803 10:33:10 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:42.803 10:33:10 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.803 10:33:10 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.804 ************************************ 00:04:42.804 START TEST event_reactor 00:04:42.804 ************************************ 00:04:42.804 10:33:10 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:42.804 [2024-11-07 10:33:10.422138] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:42.804 [2024-11-07 10:33:10.422229] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3607493 ] 00:04:43.062 [2024-11-07 10:33:10.503364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.062 [2024-11-07 10:33:10.541272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.000 test_start 00:04:44.000 oneshot 00:04:44.000 tick 100 00:04:44.000 tick 100 00:04:44.000 tick 250 00:04:44.000 tick 100 00:04:44.000 tick 100 00:04:44.000 tick 100 00:04:44.000 tick 250 00:04:44.000 tick 500 00:04:44.000 tick 100 00:04:44.000 tick 100 00:04:44.000 tick 250 00:04:44.000 tick 100 00:04:44.000 tick 100 00:04:44.000 test_end 00:04:44.000 00:04:44.000 real 0m1.178s 00:04:44.000 user 0m1.094s 00:04:44.000 sys 0m0.079s 00:04:44.000 10:33:11 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:44.000 10:33:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:44.000 ************************************ 00:04:44.000 END TEST event_reactor 00:04:44.000 ************************************ 00:04:44.000 10:33:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:44.000 10:33:11 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:44.000 10:33:11 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:44.000 10:33:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.000 ************************************ 00:04:44.000 START TEST event_reactor_perf 00:04:44.000 ************************************ 00:04:44.000 10:33:11 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:44.000 [2024-11-07 10:33:11.651016] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:44.000 [2024-11-07 10:33:11.651117] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3607774 ] 00:04:44.258 [2024-11-07 10:33:11.727801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.258 [2024-11-07 10:33:11.764806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.193 test_start 00:04:45.193 test_end 00:04:45.193 Performance: 533412 events per second 00:04:45.193 00:04:45.193 real 0m1.173s 00:04:45.193 user 0m1.086s 00:04:45.193 sys 0m0.083s 00:04:45.193 10:33:12 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:45.193 10:33:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:45.193 ************************************ 00:04:45.193 END TEST event_reactor_perf 00:04:45.193 ************************************ 00:04:45.193 10:33:12 event -- event/event.sh@49 -- # uname -s 00:04:45.193 10:33:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:45.193 10:33:12 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:45.193 10:33:12 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:45.193 10:33:12 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:45.193 10:33:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.193 ************************************ 00:04:45.193 START TEST event_scheduler 00:04:45.193 ************************************ 00:04:45.193 10:33:12 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:45.452 * Looking for test storage... 
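Before enabling coverage options, every test in this log gates on the installed lcov version with lt 1.15 2, and the same cmp_versions trace appears again just below for the scheduler test. Stripped of xtrace noise, the comparison amounts to something like the following (a simplified sketch of the scripts/common.sh logic, with the decimal-component validation omitted):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    # Split both versions on '.', '-' and ':' (IFS=.-: as in the trace).
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v a b
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        # The first differing component decides the comparison.
        (( a > b )) && { [ "$2" = '>' ]; return; }
        (( a < b )) && { [ "$2" = '<' ]; return; }
    done
    [ "$2" = '=' ]
}

lt 1.15 2 && echo 'lcov is older than 2, use the pre-2.0 options'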
00:04:45.452 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:04:45.452 10:33:12 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:45.452 10:33:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:45.452 10:33:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:45.452 10:33:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:45.452 10:33:12 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.452 10:33:12 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.452 10:33:12 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.452 10:33:12 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.452 10:33:12 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.452 10:33:12 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.452 10:33:12 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.452 10:33:12 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.452 10:33:12 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.452 10:33:13 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:45.452 10:33:13 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.452 10:33:13 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:45.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.452 --rc genhtml_branch_coverage=1 00:04:45.452 --rc genhtml_function_coverage=1 00:04:45.452 --rc genhtml_legend=1 00:04:45.452 --rc geninfo_all_blocks=1 00:04:45.452 --rc geninfo_unexecuted_blocks=1 00:04:45.452 00:04:45.452 ' 00:04:45.452 10:33:13 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:45.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.452 --rc genhtml_branch_coverage=1 00:04:45.452 --rc genhtml_function_coverage=1 00:04:45.452 --rc genhtml_legend=1 00:04:45.452 --rc geninfo_all_blocks=1 00:04:45.452 --rc geninfo_unexecuted_blocks=1 00:04:45.452 00:04:45.452 ' 00:04:45.452 10:33:13 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:45.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.452 --rc genhtml_branch_coverage=1 00:04:45.452 --rc genhtml_function_coverage=1 00:04:45.452 --rc genhtml_legend=1 00:04:45.452 --rc geninfo_all_blocks=1 00:04:45.452 --rc geninfo_unexecuted_blocks=1 00:04:45.452 00:04:45.452 ' 00:04:45.452 10:33:13 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:45.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.452 --rc genhtml_branch_coverage=1 00:04:45.453 --rc genhtml_function_coverage=1 00:04:45.453 --rc genhtml_legend=1 00:04:45.453 --rc geninfo_all_blocks=1 00:04:45.453 --rc geninfo_unexecuted_blocks=1 00:04:45.453 00:04:45.453 ' 00:04:45.453 10:33:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:45.453 10:33:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3608081 00:04:45.453 10:33:13 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.453 10:33:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:45.453 10:33:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3608081 
00:04:45.453 10:33:13 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 3608081 ']' 00:04:45.453 10:33:13 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.453 10:33:13 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:45.453 10:33:13 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.453 10:33:13 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:45.453 10:33:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.453 [2024-11-07 10:33:13.059378] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:45.453 [2024-11-07 10:33:13.059431] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3608081 ] 00:04:45.712 [2024-11-07 10:33:13.129919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:45.712 [2024-11-07 10:33:13.171376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.712 [2024-11-07 10:33:13.171443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.712 [2024-11-07 10:33:13.171532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:45.712 [2024-11-07 10:33:13.171550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.712 10:33:13 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:45.712 10:33:13 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:45.712 10:33:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:45.712 10:33:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.712 10:33:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.712 [2024-11-07 10:33:13.228173] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:45.712 [2024-11-07 10:33:13.228194] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:45.712 [2024-11-07 10:33:13.228205] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:45.712 [2024-11-07 10:33:13.228212] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:45.712 [2024-11-07 10:33:13.228219] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:45.712 10:33:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.712 10:33:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:45.713 10:33:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.713 10:33:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.713 [2024-11-07 10:33:13.301865] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
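The bring-up traced above is the standard --wait-for-rpc flow: the scheduler app parks before framework initialization (note the earlier -m 0xF -p 0x2 --wait-for-rpc command line), the scheduler is switched to dynamic over RPC, and only then is initialization released so the reactors start. As stand-alone commands the sequence is roughly (a sketch; rpc_cmd in the test is a thin wrapper around rpc.py):

# The dpdk governor is unavailable on this host (see the *ERROR* notice
# above), so the dynamic scheduler continues without it.
scripts/rpc.py framework_set_scheduler dynamic
# Release initialization; after this the reactors spin up and the
# scheduler test application reports that it has started.
scripts/rpc.py framework_start_init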
00:04:45.713 10:33:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.713 10:33:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:45.713 10:33:13 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:45.713 10:33:13 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:45.713 10:33:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.713 ************************************ 00:04:45.713 START TEST scheduler_create_thread 00:04:45.713 ************************************ 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.713 2 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.713 3 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.713 4 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.713 5 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.713 6 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.713 7 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.713 8 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.713 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.972 9 00:04:45.972 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.972 10:33:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:45.972 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.972 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.972 10 00:04:45.972 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.972 10:33:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:45.972 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.972 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.972 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.972 10:33:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:45.972 10:33:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:45.972 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.972 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.540 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.540 10:33:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:46.540 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.540 10:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.919 10:33:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.919 10:33:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:47.919 10:33:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:47.919 10:33:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.919 10:33:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.857 10:33:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.857 00:04:48.857 real 0m3.101s 00:04:48.857 user 0m0.026s 00:04:48.857 sys 0m0.006s 00:04:48.857 10:33:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:48.857 10:33:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.857 ************************************ 00:04:48.857 END TEST scheduler_create_thread 00:04:48.857 ************************************ 00:04:48.857 10:33:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:48.857 10:33:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3608081 00:04:48.857 10:33:16 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 3608081 ']' 00:04:48.857 10:33:16 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 3608081 00:04:48.857 10:33:16 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:48.857 10:33:16 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:48.857 10:33:16 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3608081 00:04:48.857 10:33:16 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:48.857 10:33:16 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:48.857 10:33:16 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3608081' 00:04:48.857 killing process with pid 3608081 00:04:48.857 10:33:16 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 3608081 00:04:48.857 10:33:16 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 3608081 00:04:49.426 [2024-11-07 10:33:16.792870] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
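scheduler_create_thread drives everything through rpc.py's plugin mechanism: it loads the test's own scheduler_plugin module (from test/event/scheduler) to create one busy and one idle thread pinned to each of the four cores, two unpinned threads with fractional loads, then retunes and deletes threads by the ID the create call prints. A condensed sketch of the same calls, assuming the plugin module is importable by rpc.py (rpc_cmd in the harness is a thin wrapper over it; scheduler_thread_create/_set_active/_delete and the -n/-m/-a flags belong to the test plugin, exactly as traced above, not to the stock RPC set):

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  rpc="$SPDK/scripts/rpc.py --plugin scheduler_plugin"
  for mask in 0x1 0x2 0x4 0x8; do
    $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100  # pinned, ~100% busy
    $rpc scheduler_thread_create -n idle_pinned   -m "$mask" -a 0    # pinned, idle
  done
  $rpc scheduler_thread_create -n one_third_active -a 30             # unpinned, ~30% busy
  tid=$($rpc scheduler_thread_create -n half_active -a 0)            # created idle...
  $rpc scheduler_thread_set_active "$tid" 50                         # ...then raised to 50%
  tid=$($rpc scheduler_thread_create -n deleted -a 100)
  $rpc scheduler_thread_delete "$tid"                                # removed by thread ID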
00:04:49.426 00:04:49.426 real 0m4.123s 00:04:49.426 user 0m6.593s 00:04:49.426 sys 0m0.391s 00:04:49.426 10:33:16 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.426 10:33:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.426 ************************************ 00:04:49.426 END TEST event_scheduler 00:04:49.426 ************************************ 00:04:49.426 10:33:17 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:49.426 10:33:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:49.426 10:33:17 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:49.426 10:33:17 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.426 10:33:17 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.426 ************************************ 00:04:49.426 START TEST app_repeat 00:04:49.426 ************************************ 00:04:49.426 10:33:17 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:49.426 10:33:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.426 10:33:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.426 10:33:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:49.426 10:33:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.426 10:33:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:49.426 10:33:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:49.426 10:33:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:49.426 10:33:17 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:49.426 10:33:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3608853 00:04:49.426 10:33:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.426 10:33:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3608853' 00:04:49.426 Process app_repeat pid: 3608853 00:04:49.426 10:33:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:49.426 10:33:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:49.426 spdk_app_start Round 0 00:04:49.426 10:33:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3608853 /var/tmp/spdk-nbd.sock 00:04:49.426 10:33:17 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3608853 ']' 00:04:49.426 10:33:17 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.426 10:33:17 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:49.426 10:33:17 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:49.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:49.426 10:33:17 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:49.426 10:33:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:49.426 [2024-11-07 10:33:17.056598] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:49.426 [2024-11-07 10:33:17.056653] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3608853 ] 00:04:49.686 [2024-11-07 10:33:17.131677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.686 [2024-11-07 10:33:17.174234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.686 [2024-11-07 10:33:17.174239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.686 10:33:17 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:49.686 10:33:17 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:49.686 10:33:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.945 Malloc0 00:04:49.945 10:33:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.205 Malloc1 00:04:50.205 10:33:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.205 10:33:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.205 10:33:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.205 10:33:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:50.205 10:33:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.205 10:33:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:50.205 10:33:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.205 10:33:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.205 10:33:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.205 10:33:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:50.205 10:33:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.205 10:33:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:50.205 10:33:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:50.205 10:33:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:50.205 10:33:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.205 10:33:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:50.205 /dev/nbd0 00:04:50.464 10:33:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:50.464 10:33:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:50.464 10:33:17 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:50.464 10:33:17 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:50.464 10:33:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:50.464 10:33:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:50.464 10:33:17 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 
00:04:50.464 10:33:17 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:50.464 10:33:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:50.464 10:33:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:50.464 10:33:17 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.464 1+0 records in 00:04:50.464 1+0 records out 00:04:50.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215319 s, 19.0 MB/s 00:04:50.464 10:33:17 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:50.464 10:33:17 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:50.464 10:33:17 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:50.464 10:33:17 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:50.464 10:33:17 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:50.464 10:33:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.464 10:33:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.464 10:33:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:50.464 /dev/nbd1 00:04:50.464 10:33:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:50.464 10:33:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:50.464 10:33:18 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:50.464 10:33:18 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:50.464 10:33:18 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:50.464 10:33:18 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:50.464 10:33:18 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:50.723 10:33:18 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:50.723 10:33:18 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:50.723 10:33:18 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:50.724 10:33:18 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.724 1+0 records in 00:04:50.724 1+0 records out 00:04:50.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260097 s, 15.7 MB/s 00:04:50.724 10:33:18 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:50.724 10:33:18 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:50.724 10:33:18 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:50.724 10:33:18 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:50.724 10:33:18 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:50.724 10:33:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.724 10:33:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.724 10:33:18 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.724 10:33:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.724 10:33:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.724 10:33:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:50.724 { 00:04:50.724 "nbd_device": "/dev/nbd0", 00:04:50.724 "bdev_name": "Malloc0" 00:04:50.724 }, 00:04:50.724 { 00:04:50.724 "nbd_device": "/dev/nbd1", 00:04:50.724 "bdev_name": "Malloc1" 00:04:50.724 } 00:04:50.724 ]' 00:04:50.724 10:33:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:50.724 { 00:04:50.724 "nbd_device": "/dev/nbd0", 00:04:50.724 "bdev_name": "Malloc0" 00:04:50.724 }, 00:04:50.724 { 00:04:50.724 "nbd_device": "/dev/nbd1", 00:04:50.724 "bdev_name": "Malloc1" 00:04:50.724 } 00:04:50.724 ]' 00:04:50.724 10:33:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.724 10:33:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:50.724 /dev/nbd1' 00:04:50.724 10:33:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:50.724 /dev/nbd1' 00:04:50.724 10:33:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:50.984 256+0 records in 00:04:50.984 256+0 records out 00:04:50.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102493 s, 102 MB/s 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:50.984 256+0 records in 00:04:50.984 256+0 records out 00:04:50.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191872 s, 54.6 MB/s 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:50.984 256+0 records in 00:04:50.984 256+0 records out 00:04:50.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202409 s, 51.8 MB/s 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.984 10:33:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.243 10:33:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.502 10:33:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:51.502 10:33:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:51.502 10:33:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.502 10:33:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:51.502 10:33:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.503 10:33:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:51.503 10:33:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:51.503 10:33:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:51.503 10:33:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:51.503 10:33:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:51.503 10:33:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:51.503 10:33:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:51.503 10:33:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:51.762 10:33:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:52.022 [2024-11-07 10:33:19.489329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.022 [2024-11-07 10:33:19.523812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.022 [2024-11-07 10:33:19.523815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.022 [2024-11-07 10:33:19.564500] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:52.022 [2024-11-07 10:33:19.564548] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:55.311 10:33:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:55.311 10:33:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:55.311 spdk_app_start Round 1 00:04:55.311 10:33:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3608853 /var/tmp/spdk-nbd.sock 00:04:55.311 10:33:22 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3608853 ']' 00:04:55.311 10:33:22 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:55.311 10:33:22 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:55.311 10:33:22 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:55.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:55.311 10:33:22 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:55.311 10:33:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:55.311 10:33:22 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:55.311 10:33:22 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:55.311 10:33:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.311 Malloc0 00:04:55.311 10:33:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.311 Malloc1 00:04:55.311 10:33:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.311 10:33:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.311 10:33:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.311 10:33:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:55.311 10:33:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.311 10:33:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:55.311 10:33:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.311 10:33:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.311 10:33:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.311 10:33:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:55.311 10:33:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.311 10:33:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:55.311 10:33:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:55.311 10:33:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:55.311 10:33:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.311 10:33:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:55.571 /dev/nbd0 00:04:55.571 10:33:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:55.571 10:33:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:55.571 10:33:23 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:55.571 10:33:23 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:55.571 10:33:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:55.571 10:33:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:55.571 10:33:23 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:55.571 10:33:23 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:55.571 10:33:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:55.571 10:33:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:55.571 10:33:23 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:04:55.571 1+0 records in 00:04:55.571 1+0 records out 00:04:55.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235598 s, 17.4 MB/s 00:04:55.571 10:33:23 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:55.571 10:33:23 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:55.571 10:33:23 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:55.571 10:33:23 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:55.571 10:33:23 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:55.571 10:33:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.571 10:33:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.571 10:33:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:55.830 /dev/nbd1 00:04:55.830 10:33:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:55.830 10:33:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:55.830 10:33:23 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:55.830 10:33:23 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:55.830 10:33:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:55.830 10:33:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:55.830 10:33:23 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:55.830 10:33:23 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:55.830 10:33:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:55.830 10:33:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:55.830 10:33:23 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.830 1+0 records in 00:04:55.830 1+0 records out 00:04:55.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260738 s, 15.7 MB/s 00:04:55.830 10:33:23 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:55.830 10:33:23 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:55.830 10:33:23 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:55.830 10:33:23 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:55.830 10:33:23 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:55.830 10:33:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.830 10:33:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.830 10:33:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.830 10:33:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.830 10:33:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.089 10:33:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:56.089 { 00:04:56.089 
"nbd_device": "/dev/nbd0", 00:04:56.089 "bdev_name": "Malloc0" 00:04:56.089 }, 00:04:56.089 { 00:04:56.089 "nbd_device": "/dev/nbd1", 00:04:56.089 "bdev_name": "Malloc1" 00:04:56.089 } 00:04:56.089 ]' 00:04:56.089 10:33:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:56.089 { 00:04:56.089 "nbd_device": "/dev/nbd0", 00:04:56.089 "bdev_name": "Malloc0" 00:04:56.089 }, 00:04:56.089 { 00:04:56.089 "nbd_device": "/dev/nbd1", 00:04:56.089 "bdev_name": "Malloc1" 00:04:56.089 } 00:04:56.089 ]' 00:04:56.089 10:33:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.089 10:33:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:56.089 /dev/nbd1' 00:04:56.089 10:33:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:56.089 /dev/nbd1' 00:04:56.089 10:33:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.089 10:33:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:56.089 10:33:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:56.090 256+0 records in 00:04:56.090 256+0 records out 00:04:56.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00344129 s, 305 MB/s 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:56.090 256+0 records in 00:04:56.090 256+0 records out 00:04:56.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189997 s, 55.2 MB/s 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:56.090 256+0 records in 00:04:56.090 256+0 records out 00:04:56.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197781 s, 53.0 MB/s 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.090 10:33:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.349 10:33:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.349 10:33:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.349 10:33:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.349 10:33:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.349 10:33:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.349 10:33:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.349 10:33:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.349 10:33:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.349 10:33:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.349 10:33:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:56.608 10:33:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:56.608 10:33:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:56.608 10:33:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:56.608 10:33:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.608 10:33:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.608 10:33:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:56.608 10:33:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.608 10:33:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.608 10:33:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.608 10:33:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.608 10:33:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.867 10:33:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:56.867 10:33:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:56.867 10:33:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.867 10:33:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:56.867 10:33:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:56.867 10:33:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.867 10:33:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:56.867 10:33:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:56.867 10:33:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:56.867 10:33:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:56.867 10:33:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:56.867 10:33:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:56.867 10:33:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:57.127 10:33:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:57.127 [2024-11-07 10:33:24.727952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.127 [2024-11-07 10:33:24.763115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.127 [2024-11-07 10:33:24.763117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.385 [2024-11-07 10:33:24.804915] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:57.386 [2024-11-07 10:33:24.804953] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:59.921 10:33:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:59.921 10:33:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:59.921 spdk_app_start Round 2 00:04:59.921 10:33:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3608853 /var/tmp/spdk-nbd.sock 00:04:59.921 10:33:27 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3608853 ']' 00:04:59.921 10:33:27 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.921 10:33:27 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:59.921 10:33:27 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:59.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:59.921 10:33:27 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:59.921 10:33:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.180 10:33:27 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:00.180 10:33:27 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:00.180 10:33:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.439 Malloc0 00:05:00.439 10:33:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.698 Malloc1 00:05:00.698 10:33:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.698 10:33:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.698 10:33:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.698 10:33:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.698 10:33:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.698 10:33:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.698 10:33:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.698 10:33:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.698 10:33:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.698 10:33:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.698 10:33:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.698 10:33:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:00.698 10:33:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:00.698 10:33:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.698 10:33:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.698 10:33:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:00.698 /dev/nbd0 00:05:00.698 10:33:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:00.957 10:33:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct
00:05:00.957 1+0 records in
00:05:00.957 1+0 records out
00:05:00.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218804 s, 18.7 MB/s
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:05:00.957 10:33:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:00.957 10:33:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:00.957 10:33:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:00.957 /dev/nbd1
00:05:00.957 10:33:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:00.957 10:33:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:00.957 1+0 records in
00:05:00.957 1+0 records out
00:05:00.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256813 s, 15.9 MB/s
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:05:00.957 10:33:28 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:05:00.957 10:33:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:00.957 10:33:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:00.957 10:33:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:01.217 {
00:05:01.217 "nbd_device": "/dev/nbd0",
00:05:01.217 "bdev_name": "Malloc0"
00:05:01.217 },
00:05:01.217 {
00:05:01.217 "nbd_device": "/dev/nbd1",
00:05:01.217 "bdev_name": "Malloc1"
00:05:01.217 }
00:05:01.217 ]'
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:01.217 {
00:05:01.217 "nbd_device": "/dev/nbd0",
00:05:01.217 "bdev_name": "Malloc0"
00:05:01.217 },
00:05:01.217 {
00:05:01.217 "nbd_device": "/dev/nbd1",
00:05:01.217 "bdev_name": "Malloc1"
00:05:01.217 }
00:05:01.217 ]'
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:01.217 /dev/nbd1'
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:01.217 /dev/nbd1'
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:01.217 256+0 records in
00:05:01.217 256+0 records out
00:05:01.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108084 s, 97.0 MB/s
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:01.217 10:33:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:01.476 256+0 records in
00:05:01.476 256+0 records out
00:05:01.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193335 s, 54.2 MB/s
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:01.476 256+0 records in
00:05:01.476 256+0 records out
00:05:01.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205802 s, 51.0 MB/s
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:01.476 10:33:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:01.736 10:33:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:01.995 10:33:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:01.995 10:33:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:01.995 10:33:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:01.995 10:33:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:01.995 10:33:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:01.995 10:33:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:01.995 10:33:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:01.995 10:33:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:01.995 10:33:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:01.995 10:33:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:01.995 10:33:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:01.995 10:33:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:01.995 10:33:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:02.253 10:33:29 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:02.511 [2024-11-07 10:33:29.971881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:02.511 [2024-11-07 10:33:30.006466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:02.511 [2024-11-07 10:33:30.006469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:02.511 [2024-11-07 10:33:30.047155] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:02.511 [2024-11-07 10:33:30.047200] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:05.802 10:33:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3608853 /var/tmp/spdk-nbd.sock
00:05:05.802 10:33:32 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3608853 ']'
00:05:05.802 10:33:32 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:05.802 10:33:32 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:05.802 10:33:32 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:05.802 10:33:32 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:05.802 10:33:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:05.803 10:33:33 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:05.803 10:33:33 event.app_repeat -- common/autotest_common.sh@866 -- # return 0
00:05:05.803 10:33:33 event.app_repeat -- event/event.sh@39 -- # killprocess 3608853
00:05:05.803 10:33:33 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 3608853 ']'
00:05:05.803 10:33:33 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 3608853
00:05:05.803 10:33:33 event.app_repeat -- common/autotest_common.sh@957 -- # uname
00:05:05.803 10:33:33 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:05.803 10:33:33 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3608853
00:05:05.803 10:33:33 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:05.803 10:33:33 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:05.803 10:33:33 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3608853'
killing process with pid 3608853
00:05:05.803 10:33:33 event.app_repeat -- common/autotest_common.sh@971 -- # kill 3608853
00:05:05.803 10:33:33 event.app_repeat -- common/autotest_common.sh@976 -- # wait 3608853
00:05:05.803 spdk_app_start is called in Round 0.
00:05:05.803 Shutdown signal received, stop current app iteration
00:05:05.803 Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 reinitialization...
00:05:05.803 spdk_app_start is called in Round 1.
00:05:05.803 Shutdown signal received, stop current app iteration
00:05:05.803 Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 reinitialization...
00:05:05.803 spdk_app_start is called in Round 2.
00:05:05.803 Shutdown signal received, stop current app iteration
00:05:05.803 Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 reinitialization...
00:05:05.803 spdk_app_start is called in Round 3.
00:05:05.803 Shutdown signal received, stop current app iteration
00:05:05.803 10:33:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:05.803 10:33:33 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:05.803
00:05:05.803 real 0m16.173s
00:05:05.803 user 0m34.938s
00:05:05.803 sys 0m2.987s
00:05:05.803 10:33:33 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:05.803 10:33:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:05.803 ************************************
00:05:05.803 END TEST app_repeat
00:05:05.803 ************************************
00:05:05.803 10:33:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:05.803 10:33:33 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:05.803 10:33:33 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:05.803 10:33:33 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:05.803 10:33:33 event -- common/autotest_common.sh@10 -- # set +x
00:05:05.803 ************************************
00:05:05.803 START TEST cpu_locks
00:05:05.803 ************************************
00:05:05.803 10:33:33 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:05.803 * Looking for test storage...
00:05:05.803 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event
00:05:05.803 10:33:33 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:05.803 10:33:33 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version
00:05:05.803 10:33:33 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:05.803 10:33:33 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:05.803 10:33:33 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:05.803 10:33:33 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:05.803 10:33:33 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:05.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.803 --rc genhtml_branch_coverage=1
00:05:05.803 --rc genhtml_function_coverage=1
00:05:05.803 --rc genhtml_legend=1
00:05:05.803 --rc geninfo_all_blocks=1
00:05:05.803 --rc geninfo_unexecuted_blocks=1
00:05:05.803
00:05:05.803 '
00:05:05.803 10:33:33 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:05.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.803 --rc genhtml_branch_coverage=1
00:05:05.803 --rc genhtml_function_coverage=1
00:05:05.803 --rc genhtml_legend=1
00:05:05.803 --rc geninfo_all_blocks=1
00:05:05.803 --rc geninfo_unexecuted_blocks=1
00:05:05.803
00:05:05.803 '
00:05:05.803 10:33:33 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:05.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.803 --rc genhtml_branch_coverage=1
00:05:05.803 --rc genhtml_function_coverage=1
00:05:05.803 --rc genhtml_legend=1
00:05:05.803 --rc geninfo_all_blocks=1
00:05:05.803 --rc geninfo_unexecuted_blocks=1
00:05:05.803
00:05:05.803 '
00:05:05.803 10:33:33 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:05.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.803 --rc genhtml_branch_coverage=1
00:05:05.803 --rc genhtml_function_coverage=1
00:05:05.803 --rc genhtml_legend=1
00:05:05.803 --rc geninfo_all_blocks=1
00:05:05.803 --rc geninfo_unexecuted_blocks=1
00:05:05.803
00:05:05.803 '
00:05:05.803 10:33:33 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:05.803 10:33:33 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:05.803 10:33:33 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:05.803 10:33:33 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:05.803 10:33:33 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:05.803 10:33:33 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:05.803 10:33:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:05.803 ************************************
00:05:05.803 START TEST default_locks
************************************
00:05:05.803 10:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks
00:05:05.803 10:33:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3611745
00:05:05.803 10:33:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3611745
00:05:05.803 10:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3611745 ']'
00:05:05.803 10:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:05.803 10:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:05.803 10:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:05.803 10:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:05.803 10:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:05.803 10:33:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:06.063 [2024-11-07 10:33:33.498730] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:05:06.063 [2024-11-07 10:33:33.498784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3611745 ]
00:05:06.063 [2024-11-07 10:33:33.572263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:06.063 [2024-11-07 10:33:33.612389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:06.322 10:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:06.322 10:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0
00:05:06.322 10:33:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3611745
00:05:06.322 10:33:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3611745
00:05:06.322 10:33:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:06.581 lslocks: write error
00:05:06.581 10:33:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3611745
00:05:06.581 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 3611745 ']'
00:05:06.581 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 3611745
00:05:06.581 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname
00:05:06.581 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:06.581 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3611745
00:05:06.581 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:06.581 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:06.581 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3611745'
00:05:06.581 killing process with pid 3611745
10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 3611745
00:05:06.581 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 3611745
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3611745
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3611745
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3611745
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3611745 ']'
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:06.841 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3611745) - No such process
00:05:06.841 ERROR: process (pid: 3611745) is no longer running
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:06.841 10:33:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:06.842
00:05:06.842 real 0m0.998s
00:05:06.842 user 0m0.945s
00:05:06.842 sys 0m0.474s
00:05:06.842 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:06.842 10:33:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:06.842 ************************************
00:05:06.842 END TEST default_locks
00:05:06.842 ************************************
00:05:06.842 10:33:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:06.842 10:33:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:06.842 10:33:34 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:06.842 10:33:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:06.842 ************************************
00:05:06.842 START TEST default_locks_via_rpc
00:05:06.842 ************************************
00:05:06.842 10:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc
00:05:06.842 10:33:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3612023
00:05:06.842 10:33:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3612023
00:05:06.842 10:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3612023 ']'
00:05:06.842 10:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:06.842 10:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:06.842 10:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:06.842 10:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:06.842 10:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:06.842 10:33:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:07.102 [2024-11-07 10:33:34.540074] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:05:07.102 [2024-11-07 10:33:34.540129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3612023 ]
00:05:07.102 [2024-11-07 10:33:34.613562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:07.102 [2024-11-07 10:33:34.653652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:07.361 10:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:07.361 10:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0
00:05:07.361 10:33:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:07.361 10:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:07.361 10:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:07.361 10:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:07.361 10:33:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:07.361 10:33:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:07.361 10:33:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:07.361 10:33:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:07.361 10:33:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:07.361 10:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:07.361 10:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:07.361 10:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:07.361 10:33:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3612023
00:05:07.361 10:33:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3612023
00:05:07.361 10:33:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:07.930 10:33:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3612023
00:05:07.930 10:33:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 3612023 ']'
00:05:07.930 10:33:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 3612023
00:05:07.930 10:33:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname
00:05:07.930 10:33:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:07.930 10:33:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3612023
00:05:07.930 10:33:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:07.930 10:33:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:07.930 10:33:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3612023'
killing process with pid 3612023
00:05:07.930 10:33:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 3612023
00:05:07.930 10:33:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 3612023
00:05:08.190
00:05:08.190 real 0m1.297s
00:05:08.190 user 0m1.282s
00:05:08.190 sys 0m0.587s
00:05:08.190 10:33:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:08.190 10:33:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:08.190 ************************************
00:05:08.190 END TEST default_locks_via_rpc
00:05:08.190 ************************************
00:05:08.190 10:33:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:08.190 10:33:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:08.190 10:33:35 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:08.190 10:33:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:08.190 ************************************
00:05:08.190 START TEST non_locking_app_on_locked_coremask
00:05:08.190 ************************************
00:05:08.191 10:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask
00:05:08.191 10:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3612310
00:05:08.191 10:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3612310 /var/tmp/spdk.sock
00:05:08.191 10:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3612310 ']'
00:05:08.191 10:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:08.191 10:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:08.191 10:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:08.191 10:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:08.191 10:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:08.191 10:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:08.450 [2024-11-07 10:33:35.883916] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:05:08.450 [2024-11-07 10:33:35.883974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3612310 ]
00:05:08.450 [2024-11-07 10:33:35.957625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:08.450 [2024-11-07 10:33:35.997655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.710 10:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:08.710 10:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:08.710 10:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3612314
00:05:08.710 10:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3612314 /var/tmp/spdk2.sock
00:05:08.710 10:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3612314 ']'
00:05:08.710 10:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:08.710 10:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:08.710 10:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:08.710 10:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:08.710 10:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:08.710 10:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:08.710 [2024-11-07 10:33:36.254803] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:05:08.710 [2024-11-07 10:33:36.254857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3612314 ]
00:05:08.710 [2024-11-07 10:33:36.363113] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:08.710 [2024-11-07 10:33:36.363136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:08.969 [2024-11-07 10:33:36.442536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:09.535 10:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:09.535 10:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:09.535 10:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3612310
00:05:09.535 10:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3612310
00:05:09.535 10:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:10.911 lslocks: write error
00:05:10.911 10:33:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3612310
00:05:10.911 10:33:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3612310 ']'
00:05:10.911 10:33:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3612310
00:05:10.911 10:33:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:10.911 10:33:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:10.911 10:33:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3612310
00:05:10.911 10:33:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:10.911 10:33:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:10.911 10:33:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3612310'
killing process with pid 3612310
00:05:10.911 10:33:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3612310
00:05:10.911 10:33:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3612310
00:05:11.478 10:33:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3612314
00:05:11.478 10:33:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3612314 ']'
00:05:11.478 10:33:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3612314
00:05:11.478 10:33:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:11.478 10:33:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:11.478 10:33:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3612314
00:05:11.478 10:33:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:11.478 10:33:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:11.478 10:33:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3612314'
killing process with pid 3612314
10:33:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3612314
10:33:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3612314
00:05:12.046
00:05:12.046 real 0m3.600s
00:05:12.046 user 0m3.801s
00:05:12.046 sys 0m1.373s
00:05:12.046 10:33:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:12.046 10:33:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:12.046 ************************************
00:05:12.046 END TEST non_locking_app_on_locked_coremask
00:05:12.046 ************************************
00:05:12.046 10:33:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:12.046 10:33:39 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:12.046 10:33:39 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:12.046 10:33:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:12.046 ************************************
00:05:12.046 START TEST locking_app_on_unlocked_coremask
00:05:12.046 ************************************
00:05:12.046 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask
00:05:12.046 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3612869
00:05:12.046 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3612869 /var/tmp/spdk.sock
00:05:12.046 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3612869 ']'
00:05:12.046 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:12.046 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:12.046 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:12.046 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:12.046 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:12.046 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:12.046 [2024-11-07 10:33:39.527924] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:05:12.046 [2024-11-07 10:33:39.527979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3612869 ]
00:05:12.046 [2024-11-07 10:33:39.602794] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:12.046 [2024-11-07 10:33:39.602822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:12.046 [2024-11-07 10:33:39.642786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:12.305 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:12.305 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:12.305 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3612899
00:05:12.305 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3612899 /var/tmp/spdk2.sock
00:05:12.305 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3612899 ']'
00:05:12.305 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:12.305 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:12.305 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:12.305 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:12.305 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:12.305 10:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:12.305 [2024-11-07 10:33:39.902223] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:05:12.305 [2024-11-07 10:33:39.902279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3612899 ]
00:05:12.563 [2024-11-07 10:33:40.009575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:12.563 [2024-11-07 10:33:40.097182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:13.129 10:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:13.129 10:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:13.129 10:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3612899
00:05:13.129 10:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:13.129 10:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3612899
00:05:14.063 lslocks: write error
00:05:14.063 10:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3612869
00:05:14.063 10:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3612869 ']'
00:05:14.063 10:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3612869
00:05:14.063 10:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:14.063 10:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:14.063 10:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3612869
00:05:14.063 10:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:14.063 10:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:14.063 10:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3612869'
killing process with pid 3612869
00:05:14.063 10:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3612869
00:05:14.063 10:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3612869
00:05:14.629 10:33:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3612899
00:05:14.629 10:33:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3612899 ']'
00:05:14.629 10:33:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3612899
00:05:14.629 10:33:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:14.629 10:33:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:14.629 10:33:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3612899
00:05:14.629 10:33:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:14.629 10:33:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:14.629 10:33:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3612899'
killing process with pid 3612899
00:05:14.629 10:33:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3612899
00:05:14.629 10:33:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3612899
00:05:15.196
00:05:15.196 real 0m3.110s
00:05:15.196 user 0m3.318s
00:05:15.196 sys 0m1.160s
00:05:15.196 10:33:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:15.196 10:33:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:15.196 ************************************
00:05:15.196 END TEST locking_app_on_unlocked_coremask
00:05:15.196 ************************************
00:05:15.196 10:33:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:15.196 10:33:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:15.196 10:33:42 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:15.196 10:33:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:15.196 ************************************
00:05:15.196 START TEST locking_app_on_locked_coremask
00:05:15.196 ************************************
00:05:15.196 10:33:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask
00:05:15.196 10:33:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3613428
00:05:15.196 10:33:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3613428 /var/tmp/spdk.sock
00:05:15.196 10:33:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3613428 ']'
00:05:15.196 10:33:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:15.196 10:33:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:15.196 10:33:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:15.196 10:33:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:15.196 10:33:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:15.196 10:33:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:15.196 [2024-11-07 10:33:42.687721] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:05:15.196 [2024-11-07 10:33:42.687778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3613428 ]
00:05:15.196 [2024-11-07 10:33:42.762475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:15.196 [2024-11-07 10:33:42.802595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:15.455 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:15.455 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:15.455 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3613492
00:05:15.455 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3613492 /var/tmp/spdk2.sock
00:05:15.455 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:05:15.455 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3613492 /var/tmp/spdk2.sock
00:05:15.455 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:15.455 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:15.455 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:15.455 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:15.455 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:15.455 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3613492 /var/tmp/spdk2.sock
00:05:15.455 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3613492 ']'
00:05:15.455 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:15.455 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:15.455 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:15.455 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:15.455 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:15.455 [2024-11-07 10:33:43.064559] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:05:15.455 [2024-11-07 10:33:43.064614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3613492 ]
00:05:15.713 [2024-11-07 10:33:43.172141] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3613428 has claimed it.
00:05:15.713 [2024-11-07 10:33:43.172180] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:16.279 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3613492) - No such process
00:05:16.279 ERROR: process (pid: 3613492) is no longer running
00:05:16.279 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:16.279 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1
00:05:16.279 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:05:16.279 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:16.279 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:16.279 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:16.279 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3613428
00:05:16.280 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3613428
00:05:16.280 10:33:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:16.538 lslocks: write error
00:05:16.538 10:33:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3613428
00:05:16.538 10:33:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3613428 ']'
00:05:16.538 10:33:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3613428
00:05:16.538 10:33:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:16.538 10:33:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:16.538 10:33:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3613428
00:05:16.538 10:33:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:16.538 10:33:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:16.538 10:33:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3613428'
killing process with pid 3613428
00:05:16.538 10:33:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3613428
00:05:16.538 10:33:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3613428
00:05:16.797
00:05:16.797 real 0m1.759s
00:05:16.797 user 0m1.866s
00:05:16.797 sys 0m0.630s
00:05:16.797 10:33:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:16.797 10:33:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:16.797 ************************************
00:05:16.797 END TEST locking_app_on_locked_coremask
00:05:16.797 ************************************
00:05:16.797 10:33:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:16.797 10:33:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:16.797 10:33:44 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:16.797 10:33:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:16.797 ************************************
00:05:16.797 START TEST locking_overlapped_coremask
00:05:16.797 ************************************
00:05:16.797 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask
00:05:16.797 10:33:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3613728
00:05:16.797 10:33:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3613728 /var/tmp/spdk.sock
00:05:16.797 10:33:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:16.797 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3613728 ']'
00:05:16.797 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:16.797 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:16.797 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:16.797 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:16.797 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:17.057 [2024-11-07 10:33:44.498566] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:05:17.057 [2024-11-07 10:33:44.498626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3613728 ] 00:05:17.057 [2024-11-07 10:33:44.572899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:17.057 [2024-11-07 10:33:44.616766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.057 [2024-11-07 10:33:44.616860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.057 [2024-11-07 10:33:44.616860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.317 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:17.317 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:17.317 10:33:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3613914 00:05:17.317 10:33:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3613914 /var/tmp/spdk2.sock 00:05:17.317 10:33:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:17.317 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:17.317 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3613914 /var/tmp/spdk2.sock 00:05:17.317 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:17.317 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.317 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:17.317 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.317 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3613914 /var/tmp/spdk2.sock 00:05:17.317 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3613914 ']' 00:05:17.317 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.317 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:17.317 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.317 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:17.317 10:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.317 [2024-11-07 10:33:44.883626] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:05:17.317 [2024-11-07 10:33:44.883678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3613914 ] 00:05:17.576 [2024-11-07 10:33:45.001827] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3613728 has claimed it. 00:05:17.577 [2024-11-07 10:33:45.001867] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:18.146 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3613914) - No such process 00:05:18.146 ERROR: process (pid: 3613914) is no longer running 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3613728 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 3613728 ']' 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 3613728 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3613728 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3613728' 00:05:18.146 killing process with pid 3613728 00:05:18.146 10:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 3613728 00:05:18.146 10:33:45 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 3613728 00:05:18.406 00:05:18.406 real 0m1.448s 00:05:18.406 user 0m3.962s 00:05:18.406 sys 0m0.443s 00:05:18.406 10:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.406 10:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.406 ************************************ 00:05:18.406 END TEST locking_overlapped_coremask 00:05:18.406 ************************************ 00:05:18.406 10:33:45 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:18.406 10:33:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:18.406 10:33:45 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.406 10:33:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.406 ************************************ 00:05:18.406 START TEST locking_overlapped_coremask_via_rpc 00:05:18.406 ************************************ 00:05:18.406 10:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:05:18.406 10:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3614015 00:05:18.406 10:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3614015 /var/tmp/spdk.sock 00:05:18.406 10:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:18.406 10:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3614015 ']' 00:05:18.406 10:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.406 10:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:18.406 10:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.406 10:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:18.406 10:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.406 [2024-11-07 10:33:45.997951] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:05:18.406 [2024-11-07 10:33:45.998005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3614015 ] 00:05:18.406 [2024-11-07 10:33:46.073356] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
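The "CPU core locks deactivated" notice is the effect of --disable-cpumask-locks: the target starts without creating its /var/tmp/spdk_cpu_lock_* files, so the locks can be claimed later over JSON-RPC instead — which is what this via_rpc variant exercises. A minimal sketch of the same sequence outside the harness (paths and flags mirror the log; the sleep is a crude stand-in for the harness's waitforlisten):

    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    sleep 1
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no core locks yet"
    ./scripts/rpc.py framework_enable_cpumask_locks   # now claims locks for cores 0-2
    ls /var/tmp/spdk_cpu_lock_*                       # spdk_cpu_lock_000 ... _002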
00:05:18.406 [2024-11-07 10:33:46.073387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:18.665 [2024-11-07 10:33:46.113853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.665 [2024-11-07 10:33:46.113947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.665 [2024-11-07 10:33:46.113951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.665 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:18.665 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:18.665 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3614181 00:05:18.665 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3614181 /var/tmp/spdk2.sock 00:05:18.665 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:18.665 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3614181 ']' 00:05:18.665 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.665 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:18.665 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:18.665 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:18.665 10:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.925 [2024-11-07 10:33:46.378736] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:05:18.925 [2024-11-07 10:33:46.378788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3614181 ] 00:05:18.925 [2024-11-07 10:33:46.488233] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
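Running two targets side by side only works because the second one is pointed at its own RPC socket with -r /var/tmp/spdk2.sock (and likewise starts with locks disabled); rpc.py — rpc_cmd in the harness — then addresses it with -s. Under the same assumptions as the sketch above:

    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # collides if another target already holds core 2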
00:05:18.925 [2024-11-07 10:33:46.488268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:18.925 [2024-11-07 10:33:46.574147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.925 [2024-11-07 10:33:46.577556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.925 [2024-11-07 10:33:46.577558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:19.863 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:19.863 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:19.863 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:19.863 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.864 [2024-11-07 10:33:47.243588] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3614015 has claimed it. 
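That failure is deterministic, not a race: 0x7 is binary 111 (cores 0-2, locked by the first target after the rpc_cmd above) and 0x1c is binary 11100 (cores 2-4), so the two masks intersect on exactly core 2 — the JSON-RPC error that follows is the assertion target, not an incidental failure. Quick check of the masks:

    echo 'obase=2; ibase=16; 7; 1C' | bc   # -> 111 and 11100; the shared bit is core 2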
00:05:19.864 request: 00:05:19.864 { 00:05:19.864 "method": "framework_enable_cpumask_locks", 00:05:19.864 "req_id": 1 00:05:19.864 } 00:05:19.864 Got JSON-RPC error response 00:05:19.864 response: 00:05:19.864 { 00:05:19.864 "code": -32603, 00:05:19.864 "message": "Failed to claim CPU core: 2" 00:05:19.864 } 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3614015 /var/tmp/spdk.sock 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3614015 ']' 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3614181 /var/tmp/spdk2.sock 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3614181 ']' 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:19.864 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.124 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:20.124 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:20.124 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:20.124 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:20.124 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:20.124 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:20.124 00:05:20.124 real 0m1.708s 00:05:20.124 user 0m0.771s 00:05:20.124 sys 0m0.178s 00:05:20.124 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:20.124 10:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.124 ************************************ 00:05:20.124 END TEST locking_overlapped_coremask_via_rpc 00:05:20.124 ************************************ 00:05:20.124 10:33:47 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:20.124 10:33:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3614015 ]] 00:05:20.124 10:33:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3614015 00:05:20.124 10:33:47 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3614015 ']' 00:05:20.124 10:33:47 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3614015 00:05:20.124 10:33:47 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:20.124 10:33:47 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:20.124 10:33:47 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3614015 00:05:20.124 10:33:47 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:20.124 10:33:47 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:20.124 10:33:47 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3614015' 00:05:20.124 killing process with pid 3614015 00:05:20.124 10:33:47 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3614015 00:05:20.124 10:33:47 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3614015 00:05:20.693 10:33:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3614181 ]] 00:05:20.693 10:33:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3614181 00:05:20.693 10:33:48 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3614181 ']' 00:05:20.693 10:33:48 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3614181 00:05:20.693 10:33:48 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:20.693 10:33:48 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:05:20.693 10:33:48 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3614181 00:05:20.693 10:33:48 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:20.693 10:33:48 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:20.693 10:33:48 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3614181' 00:05:20.693 killing process with pid 3614181 00:05:20.693 10:33:48 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3614181 00:05:20.693 10:33:48 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3614181 00:05:20.952 10:33:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:20.952 10:33:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:20.952 10:33:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3614015 ]] 00:05:20.952 10:33:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3614015 00:05:20.952 10:33:48 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3614015 ']' 00:05:20.952 10:33:48 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3614015 00:05:20.952 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3614015) - No such process 00:05:20.952 10:33:48 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3614015 is not found' 00:05:20.952 Process with pid 3614015 is not found 00:05:20.952 10:33:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3614181 ]] 00:05:20.952 10:33:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3614181 00:05:20.952 10:33:48 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3614181 ']' 00:05:20.952 10:33:48 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3614181 00:05:20.952 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3614181) - No such process 00:05:20.952 10:33:48 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3614181 is not found' 00:05:20.952 Process with pid 3614181 is not found 00:05:20.952 10:33:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:20.952 00:05:20.952 real 0m15.196s 00:05:20.952 user 0m25.664s 00:05:20.952 sys 0m5.834s 00:05:20.952 10:33:48 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:20.952 10:33:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.952 ************************************ 00:05:20.952 END TEST cpu_locks 00:05:20.952 ************************************ 00:05:20.952 00:05:20.952 real 0m39.528s 00:05:20.952 user 1m13.689s 00:05:20.952 sys 0m9.807s 00:05:20.952 10:33:48 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:20.952 10:33:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.952 ************************************ 00:05:20.952 END TEST event 00:05:20.952 ************************************ 00:05:20.952 10:33:48 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:05:20.952 10:33:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:20.952 10:33:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:20.952 10:33:48 -- common/autotest_common.sh@10 -- # set +x 00:05:20.952 ************************************ 00:05:20.952 START TEST thread 00:05:20.952 ************************************ 00:05:20.952 10:33:48 thread -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:05:21.212 * Looking for test storage... 00:05:21.212 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:05:21.212 10:33:48 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:21.212 10:33:48 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:21.212 10:33:48 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:21.212 10:33:48 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:21.212 10:33:48 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.212 10:33:48 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.212 10:33:48 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.212 10:33:48 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.212 10:33:48 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.212 10:33:48 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.212 10:33:48 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.212 10:33:48 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.212 10:33:48 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.212 10:33:48 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.212 10:33:48 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.212 10:33:48 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:21.212 10:33:48 thread -- scripts/common.sh@345 -- # : 1 00:05:21.212 10:33:48 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.212 10:33:48 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.212 10:33:48 thread -- scripts/common.sh@365 -- # decimal 1 00:05:21.212 10:33:48 thread -- scripts/common.sh@353 -- # local d=1 00:05:21.212 10:33:48 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.212 10:33:48 thread -- scripts/common.sh@355 -- # echo 1 00:05:21.212 10:33:48 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.212 10:33:48 thread -- scripts/common.sh@366 -- # decimal 2 00:05:21.212 10:33:48 thread -- scripts/common.sh@353 -- # local d=2 00:05:21.212 10:33:48 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.212 10:33:48 thread -- scripts/common.sh@355 -- # echo 2 00:05:21.212 10:33:48 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.212 10:33:48 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.212 10:33:48 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.212 10:33:48 thread -- scripts/common.sh@368 -- # return 0 00:05:21.212 10:33:48 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.212 10:33:48 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:21.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.212 --rc genhtml_branch_coverage=1 00:05:21.212 --rc genhtml_function_coverage=1 00:05:21.212 --rc genhtml_legend=1 00:05:21.212 --rc geninfo_all_blocks=1 00:05:21.212 --rc geninfo_unexecuted_blocks=1 00:05:21.212 00:05:21.212 ' 00:05:21.212 10:33:48 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:21.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.212 --rc genhtml_branch_coverage=1 00:05:21.212 --rc genhtml_function_coverage=1 00:05:21.212 --rc genhtml_legend=1 00:05:21.212 --rc geninfo_all_blocks=1 00:05:21.212 --rc geninfo_unexecuted_blocks=1 00:05:21.212 00:05:21.212 ' 00:05:21.212 10:33:48 thread -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:21.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.212 --rc genhtml_branch_coverage=1 00:05:21.212 --rc genhtml_function_coverage=1 00:05:21.212 --rc genhtml_legend=1 00:05:21.212 --rc geninfo_all_blocks=1 00:05:21.212 --rc geninfo_unexecuted_blocks=1 00:05:21.212 00:05:21.212 ' 00:05:21.212 10:33:48 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:21.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.212 --rc genhtml_branch_coverage=1 00:05:21.212 --rc genhtml_function_coverage=1 00:05:21.212 --rc genhtml_legend=1 00:05:21.212 --rc geninfo_all_blocks=1 00:05:21.212 --rc geninfo_unexecuted_blocks=1 00:05:21.212 00:05:21.212 ' 00:05:21.212 10:33:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:21.212 10:33:48 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:21.212 10:33:48 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:21.212 10:33:48 thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.212 ************************************ 00:05:21.212 START TEST thread_poller_perf 00:05:21.212 ************************************ 00:05:21.212 10:33:48 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:21.212 [2024-11-07 10:33:48.765464] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:05:21.212 [2024-11-07 10:33:48.765566] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3614633 ] 00:05:21.212 [2024-11-07 10:33:48.842236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.212 [2024-11-07 10:33:48.880225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.212 Running 1000 pollers for 1 seconds with 1 microseconds period. 
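poller_perf here registers 1000 pollers (-b 1000) with a 1 microsecond period (-l 1) and runs them for one second (-t 1); in the table that follows, poller_cost works out to busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. Re-deriving the figures from the table (constants copied from it):

    awk 'BEGIN { busy = 2506275664; runs = 429000; hz = 2500000000
                 cyc = busy / runs                      # cycles per poller run
                 printf "%d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz }'
    # -> 5842 (cyc), 2336 (nsec), matching the report below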
00:05:22.590 [2024-11-07T09:33:50.261Z] ====================================== 00:05:22.590 [2024-11-07T09:33:50.261Z] busy:2506275664 (cyc) 00:05:22.590 [2024-11-07T09:33:50.261Z] total_run_count: 429000 00:05:22.590 [2024-11-07T09:33:50.261Z] tsc_hz: 2500000000 (cyc) 00:05:22.590 [2024-11-07T09:33:50.261Z] ====================================== 00:05:22.590 [2024-11-07T09:33:50.261Z] poller_cost: 5842 (cyc), 2336 (nsec) 00:05:22.590 00:05:22.590 real 0m1.179s 00:05:22.590 user 0m1.101s 00:05:22.590 sys 0m0.074s 00:05:22.590 10:33:49 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:22.590 10:33:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:22.590 ************************************ 00:05:22.590 END TEST thread_poller_perf 00:05:22.590 ************************************ 00:05:22.590 10:33:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:22.590 10:33:49 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:22.590 10:33:49 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:22.590 10:33:49 thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.590 ************************************ 00:05:22.590 START TEST thread_poller_perf 00:05:22.590 ************************************ 00:05:22.590 10:33:49 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:22.590 [2024-11-07 10:33:49.994159] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:05:22.590 [2024-11-07 10:33:49.994229] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3614912 ] 00:05:22.590 [2024-11-07 10:33:50.074969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.590 [2024-11-07 10:33:50.116954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.590 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:23.529 [2024-11-07T09:33:51.200Z] ======================================
00:05:23.529 [2024-11-07T09:33:51.200Z] busy:2501821774 (cyc)
00:05:23.529 [2024-11-07T09:33:51.200Z] total_run_count: 5449000
00:05:23.529 [2024-11-07T09:33:51.200Z] tsc_hz: 2500000000 (cyc)
00:05:23.529 [2024-11-07T09:33:51.200Z] ======================================
00:05:23.529 [2024-11-07T09:33:51.200Z] poller_cost: 459 (cyc), 183 (nsec)
00:05:23.529
00:05:23.529 real 0m1.183s
00:05:23.529 user 0m1.100s
00:05:23.529 sys 0m0.078s
00:05:23.529 10:33:51 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:23.529 10:33:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:23.529 ************************************
00:05:23.529 END TEST thread_poller_perf
00:05:23.529 ************************************
00:05:23.529 10:33:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:05:23.529
00:05:23.529 real 0m2.655s
00:05:23.529 user 0m2.350s
00:05:23.529 sys 0m0.327s
00:05:23.529 10:33:51 thread -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:23.529 10:33:51 thread -- common/autotest_common.sh@10 -- # set +x
00:05:23.529 ************************************
00:05:23.529 END TEST thread
00:05:23.529 ************************************
00:05:23.801 10:33:51 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:05:23.801 10:33:51 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh
00:05:23.801 10:33:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:23.801 10:33:51 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:23.801 10:33:51 -- common/autotest_common.sh@10 -- # set +x
00:05:23.801 ************************************
00:05:23.801 START TEST app_cmdline
00:05:23.801 ************************************
00:05:23.801 10:33:51 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh
00:05:23.801 * Looking for test storage...
00:05:23.801 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app
00:05:23.801 10:33:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:05:23.801 10:33:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3615238
00:05:23.801 10:33:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3615238
00:05:23.801 10:33:51 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 3615238 ']'
00:05:23.801 10:33:51 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:23.801 10:33:51 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:23.801 10:33:51 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:23.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:23.801 10:33:51 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:23.801 10:33:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:23.801 10:33:51 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:05:24.062 [2024-11-07 10:33:51.478596] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
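Note the --rpcs-allowed spdk_get_version,rpc_get_methods flag on the spdk_tgt invocation above: it whitelists exactly those two RPCs, and the test exercises both sides of the filter — the allowed calls succeed below, while the env_dpdk_get_mem_stats probe is expected to come back with JSON-RPC error -32601 (Method not found). By hand, against a target started with the same flag:

    ./scripts/rpc.py spdk_get_version          # allowed -> version JSON
    ./scripts/rpc.py rpc_get_methods           # allowed -> list of permitted methods
    ./scripts/rpc.py env_dpdk_get_mem_stats    # filtered -> -32601 Method not found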
00:05:24.062 [2024-11-07 10:33:51.478651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3615238 ] 00:05:24.062 [2024-11-07 10:33:51.552697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.062 [2024-11-07 10:33:51.591107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.321 10:33:51 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:24.321 10:33:51 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:24.321 10:33:51 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:24.321 { 00:05:24.321 "version": "SPDK v25.01-pre git sha1 899af6c35", 00:05:24.321 "fields": { 00:05:24.321 "major": 25, 00:05:24.321 "minor": 1, 00:05:24.321 "patch": 0, 00:05:24.321 "suffix": "-pre", 00:05:24.321 "commit": "899af6c35" 00:05:24.321 } 00:05:24.321 } 00:05:24.321 10:33:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:24.321 10:33:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:24.321 10:33:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:24.321 10:33:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:24.321 10:33:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:24.321 10:33:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:24.321 10:33:51 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.321 10:33:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:24.321 10:33:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:24.581 10:33:51 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.581 10:33:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:24.581 10:33:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:24.581 10:33:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:24.581 request: 00:05:24.581 { 00:05:24.581 "method": "env_dpdk_get_mem_stats", 00:05:24.581 "req_id": 1 00:05:24.581 } 00:05:24.581 Got JSON-RPC error response 00:05:24.581 response: 00:05:24.581 { 00:05:24.581 "code": -32601, 00:05:24.581 "message": "Method not found" 00:05:24.581 } 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:24.581 10:33:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3615238 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 3615238 ']' 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 3615238 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:24.581 10:33:52 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3615238 00:05:24.841 10:33:52 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:24.841 10:33:52 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:24.841 10:33:52 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3615238' 00:05:24.841 killing process with pid 3615238 00:05:24.841 10:33:52 app_cmdline -- common/autotest_common.sh@971 -- # kill 3615238 00:05:24.841 10:33:52 app_cmdline -- common/autotest_common.sh@976 -- # wait 3615238 00:05:25.100 00:05:25.100 real 0m1.344s 00:05:25.100 user 0m1.498s 00:05:25.100 sys 0m0.518s 00:05:25.100 10:33:52 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:25.100 10:33:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:25.100 ************************************ 00:05:25.100 END TEST app_cmdline 00:05:25.100 ************************************ 00:05:25.100 10:33:52 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:05:25.100 10:33:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:25.100 10:33:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.100 10:33:52 -- common/autotest_common.sh@10 -- # set +x 00:05:25.100 ************************************ 00:05:25.100 START TEST version 00:05:25.100 ************************************ 00:05:25.100 10:33:52 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:05:25.100 * Looking for test storage... 
00:05:25.100 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app
00:05:25.361 10:33:52 version -- app/version.sh@17 -- # get_header_version major
00:05:25.361 10:33:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
00:05:25.361 10:33:52 version -- app/version.sh@14 -- # cut -f2
00:05:25.361 10:33:52 version -- app/version.sh@14 -- # tr -d '"'
00:05:25.361 10:33:52 version -- app/version.sh@17 -- # major=25
00:05:25.361 10:33:52 version -- app/version.sh@18 -- # get_header_version minor
00:05:25.361 10:33:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
00:05:25.361 10:33:52 version -- app/version.sh@14 -- # cut -f2
00:05:25.361 10:33:52 version -- app/version.sh@14 -- # tr -d '"'
00:05:25.361 10:33:52 version -- app/version.sh@18 -- # minor=1
00:05:25.361 10:33:52 version -- app/version.sh@19 -- # get_header_version patch
00:05:25.361 10:33:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
00:05:25.361 10:33:52 version -- app/version.sh@14 -- # cut -f2
00:05:25.361 10:33:52 version -- app/version.sh@14 -- # tr -d '"'
00:05:25.361 10:33:52 version -- app/version.sh@19 -- # patch=0
00:05:25.361 10:33:52 version -- app/version.sh@20 -- # get_header_version suffix
00:05:25.361 10:33:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
00:05:25.361 10:33:52 version -- app/version.sh@14 -- # cut -f2
00:05:25.361 10:33:52 version -- app/version.sh@14 -- # tr -d '"'
00:05:25.361 10:33:52 version -- app/version.sh@20 -- # suffix=-pre
00:05:25.361 10:33:52 version -- app/version.sh@22 -- # version=25.1
00:05:25.361 10:33:52 version -- app/version.sh@25 -- # (( patch != 0 ))
00:05:25.361 10:33:52 version -- app/version.sh@28 -- # version=25.1rc0
00:05:25.361 10:33:52 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python
00:05:25.361 10:33:52 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:05:25.361 10:33:52 version -- app/version.sh@30 -- # py_version=25.1rc0
00:05:25.361 10:33:52 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:05:25.361
00:05:25.361 real 0m0.270s
00:05:25.361 user 0m0.148s
00:05:25.361 sys 0m0.175s
00:05:25.361 10:33:52 version -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:25.361 10:33:52 version -- common/autotest_common.sh@10 -- # set +x
00:05:25.361 ************************************
00:05:25.361 END TEST version
00:05:25.361 ************************************
00:05:25.361 10:33:52 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:05:25.361 10:33:52 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:05:25.361 10:33:52 -- spdk/autotest.sh@194 -- # uname -s
00:05:25.361 10:33:52 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:05:25.361 10:33:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:05:25.361 10:33:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:05:25.361 10:33:52 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:05:25.361 10:33:52 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']'
00:05:25.361 10:33:52 -- spdk/autotest.sh@256 -- # timing_exit lib
00:05:25.361 10:33:52 -- common/autotest_common.sh@730 -- # xtrace_disable
00:05:25.361 10:33:52 -- common/autotest_common.sh@10 -- # set +x
00:05:25.361 10:33:52 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']'
00:05:25.361 10:33:52 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']'
00:05:25.361 10:33:52 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']'
00:05:25.361 10:33:52 -- spdk/autotest.sh@273 -- # export NET_TYPE
00:05:25.361 10:33:52 -- spdk/autotest.sh@276 -- # '[' rdma = rdma ']'
00:05:25.361 10:33:52 -- spdk/autotest.sh@277 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma
00:05:25.361 10:33:52 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:05:25.361 10:33:52 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:25.361 10:33:52 -- common/autotest_common.sh@10 -- # set +x
00:05:25.361 ************************************
00:05:25.361 START TEST nvmf_rdma
00:05:25.361 ************************************
00:05:25.361 10:33:53 nvmf_rdma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma
00:05:25.621 * Looking for test storage...
00:05:25.621 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf
00:05:25.622 10:33:53 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s
00:05:25.622 10:33:53 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!'
Linux = Linux ']' 00:05:25.622 10:33:53 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:05:25.622 10:33:53 nvmf_rdma -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:25.622 10:33:53 nvmf_rdma -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.622 10:33:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:05:25.622 ************************************ 00:05:25.622 START TEST nvmf_target_core 00:05:25.622 ************************************ 00:05:25.622 10:33:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:05:25.882 * Looking for test storage... 00:05:25.882 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:25.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.883 --rc genhtml_branch_coverage=1 00:05:25.883 --rc genhtml_function_coverage=1 00:05:25.883 --rc genhtml_legend=1 00:05:25.883 --rc geninfo_all_blocks=1 00:05:25.883 --rc geninfo_unexecuted_blocks=1 00:05:25.883 00:05:25.883 ' 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:25.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.883 --rc genhtml_branch_coverage=1 00:05:25.883 --rc genhtml_function_coverage=1 00:05:25.883 --rc genhtml_legend=1 00:05:25.883 --rc geninfo_all_blocks=1 00:05:25.883 --rc geninfo_unexecuted_blocks=1 00:05:25.883 00:05:25.883 ' 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:25.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.883 --rc genhtml_branch_coverage=1 00:05:25.883 --rc genhtml_function_coverage=1 00:05:25.883 --rc genhtml_legend=1 00:05:25.883 --rc geninfo_all_blocks=1 00:05:25.883 --rc geninfo_unexecuted_blocks=1 00:05:25.883 00:05:25.883 ' 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:25.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.883 --rc genhtml_branch_coverage=1 00:05:25.883 --rc genhtml_function_coverage=1 00:05:25.883 --rc genhtml_legend=1 00:05:25.883 --rc geninfo_all_blocks=1 00:05:25.883 --rc geninfo_unexecuted_blocks=1 00:05:25.883 00:05:25.883 ' 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.883 10:33:53 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:25.884 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:25.884 
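The "[: : integer expression expected" complaint just above (and repeated each time nvmf/common.sh is re-sourced below) is POSIX test rejecting a numeric comparison whose left-hand operand expands empty. A minimal sketch of the failing shape and a guarded alternative; "flag" is a placeholder name, not the actual variable tested at nvmf/common.sh line 33:

    #!/usr/bin/env bash
    # 'flag' is hypothetical; in the run above the real variable expands empty.
    flag=""

    # Failing shape: '[' requires integers on both sides of -eq, and "" is not one.
    [ "$flag" -eq 1 ] 2>/dev/null && echo interrupt-mode

    # Guarded shape: default the empty expansion to 0 so the test is well-formed.
    if [ "${flag:-0}" -eq 1 ]; then
        echo interrupt-mode
    else
        echo poll-mode
    fi
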
************************************ 00:05:25.884 START TEST nvmf_abort 00:05:25.884 ************************************ 00:05:25.884 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:05:25.884 * Looking for test storage... 00:05:26.144 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:26.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.144 --rc genhtml_branch_coverage=1 00:05:26.144 --rc genhtml_function_coverage=1 00:05:26.144 --rc genhtml_legend=1 00:05:26.144 --rc geninfo_all_blocks=1 00:05:26.144 --rc geninfo_unexecuted_blocks=1 00:05:26.144 00:05:26.144 ' 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:26.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.144 --rc genhtml_branch_coverage=1 00:05:26.144 --rc genhtml_function_coverage=1 00:05:26.144 --rc genhtml_legend=1 00:05:26.144 --rc geninfo_all_blocks=1 00:05:26.144 --rc geninfo_unexecuted_blocks=1 00:05:26.144 00:05:26.144 ' 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:26.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.144 --rc genhtml_branch_coverage=1 00:05:26.144 --rc genhtml_function_coverage=1 00:05:26.144 --rc genhtml_legend=1 00:05:26.144 --rc geninfo_all_blocks=1 00:05:26.144 --rc geninfo_unexecuted_blocks=1 00:05:26.144 00:05:26.144 ' 00:05:26.144 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:26.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.144 --rc genhtml_branch_coverage=1 00:05:26.144 --rc genhtml_function_coverage=1 00:05:26.144 --rc genhtml_legend=1 00:05:26.144 --rc geninfo_all_blocks=1 00:05:26.144 --rc geninfo_unexecuted_blocks=1 00:05:26.144 00:05:26.144 ' 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:26.145 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:26.145 10:33:53 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:05:34.271 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:05:34.271 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:05:34.271 Found net devices under 0000:d9:00.0: mlx_0_0 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:05:34.271 Found net devices under 0000:d9:00.1: mlx_0_1 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:34.271 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:34.272 6: mlx_0_0: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:05:34.272 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:05:34.272 altname enp217s0f0np0 00:05:34.272 altname ens818f0np0 00:05:34.272 inet 192.168.100.8/24 scope global mlx_0_0 00:05:34.272 valid_lft forever preferred_lft forever 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:34.272 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:34.272 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:05:34.272 altname enp217s0f1np1 00:05:34.272 altname ens818f1np1 00:05:34.272 inet 192.168.100.9/24 scope global mlx_0_1 00:05:34.272 valid_lft forever preferred_lft forever 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:34.272 10:34:00 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:34.272 192.168.100.9' 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:34.272 192.168.100.9' 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:34.272 192.168.100.9' 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:34.272 10:34:00 
nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3619057 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3619057 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3619057 ']' 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:34.272 10:34:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.272 [2024-11-07 10:34:00.945840] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:05:34.272 [2024-11-07 10:34:00.945898] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:34.272 [2024-11-07 10:34:01.026173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.272 [2024-11-07 10:34:01.067197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:34.272 [2024-11-07 10:34:01.067237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:34.272 [2024-11-07 10:34:01.067246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:34.272 [2024-11-07 10:34:01.067254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:34.272 [2024-11-07 10:34:01.067262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
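The nvmfappstart/waitforlisten exchange above amounts to: launch nvmf_tgt in the background, then poll its UNIX-domain RPC socket until it answers. A condensed sketch of that pattern, with paths copied from the log and rpc.py's rpc_get_methods assumed as the liveness probe:

    #!/usr/bin/env bash
    set -e
    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    rpc_sock=/var/tmp/spdk.sock

    # -m 0xE pins reactors to cores 1-3 (matching the three reactor notices
    # that follow); -e 0xFFFF enables every tracepoint group.
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Poll until the target services RPCs on the socket.
    for (( i = 0; i < 100; i++ )); do
        if "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done
    kill -0 "$nvmfpid"   # fail fast if the target died during startup
    echo "nvmf_tgt (pid $nvmfpid) is listening on $rpc_sock"
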
00:05:34.272 [2024-11-07 10:34:01.068832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.272 [2024-11-07 10:34:01.068920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.272 [2024-11-07 10:34:01.068922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.272 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:34.272 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:05:34.272 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:34.272 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:34.272 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.272 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:34.272 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:05:34.272 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.272 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.272 [2024-11-07 10:34:01.254668] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x238f570/0x2393a60) succeed. 00:05:34.272 [2024-11-07 10:34:01.272016] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2390b60/0x23d5100) succeed. 00:05:34.272 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.272 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:34.272 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.273 Malloc0 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.273 Delay0 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.273 [2024-11-07 10:34:01.426073] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.273 10:34:01 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:34.273 [2024-11-07 10:34:01.539230] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:36.183 Initializing NVMe Controllers 00:05:36.183 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:05:36.183 controller IO queue size 128 less than required 00:05:36.183 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:36.183 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:36.183 Initialization complete. Launching workers. 
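Stripped of the rpc_cmd wrapper and xtrace prefixes, the provisioning just performed reduces to seven RPCs. Every argument below is copied from the log; only the direct rpc.py invocation style is an assumption:

    #!/usr/bin/env bash
    set -e
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    # The delay bdev holds I/O in flight (one-second latencies, values in
    # microseconds), giving the abort example something to race against.
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

The NS/CTRLR counters that follow report the outcome of that race between queued I/O and abort commands.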
00:05:36.183 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42758
00:05:36.183 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42819, failed to submit 62
00:05:36.183 success 42759, unsuccessful 60, failed 0
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:05:36.183 rmmod nvme_rdma
00:05:36.183 rmmod nvme_fabrics
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3619057 ']'
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3619057
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3619057 ']'
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3619057
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3619057
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3619057'
00:05:36.183 killing process with pid 3619057
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3619057
00:05:36.183 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3619057
00:05:36.443 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:05:36.443 10:34:03 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:05:36.443
00:05:36.443 real 0m10.551s
00:05:36.443 user 0m12.856s
00:05:36.443 sys 0m6.032s
00:05:36.443 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:36.443 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:36.443 ************************************
00:05:36.443 END TEST nvmf_abort
00:05:36.443 ************************************
00:05:36.443 10:34:04 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma
00:05:36.443 10:34:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:05:36.443 10:34:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:36.443 10:34:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:05:36.443 ************************************
00:05:36.443 START TEST nvmf_ns_hotplug_stress
00:05:36.443 ************************************
00:05:36.443 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma
00:05:36.707 * Looking for test storage...
00:05:36.707 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version
00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-:
00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1
00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-:
00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2
00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<'
00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2
00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1
00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in
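The helper being stepped through here (cmp_versions in scripts/common.sh, whose trace continues below) decides whether the installed lcov predates 2.0 by splitting both version strings on dots and comparing component by component. The same idea, reduced to a self-contained sketch rather than the harness's exact helper:

    # Return 0 (true) when dotted version $1 sorts strictly before $2.
    lt() {
        local -a v1 v2
        local i
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal is not "less than"
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2"   # the 1 < 2 component decides

Missing components are treated as 0, so 1.15 compares above a bare 1; the real helper also validates that each component is numeric, which this sketch omits.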
00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:36.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.707 --rc genhtml_branch_coverage=1 00:05:36.707 --rc genhtml_function_coverage=1 00:05:36.707 --rc genhtml_legend=1 00:05:36.707 --rc geninfo_all_blocks=1 00:05:36.707 --rc geninfo_unexecuted_blocks=1 00:05:36.707 00:05:36.707 ' 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:36.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.707 --rc genhtml_branch_coverage=1 00:05:36.707 --rc genhtml_function_coverage=1 00:05:36.707 --rc genhtml_legend=1 00:05:36.707 --rc geninfo_all_blocks=1 00:05:36.707 --rc geninfo_unexecuted_blocks=1 00:05:36.707 00:05:36.707 ' 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:36.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.707 --rc genhtml_branch_coverage=1 00:05:36.707 --rc genhtml_function_coverage=1 00:05:36.707 --rc genhtml_legend=1 00:05:36.707 --rc geninfo_all_blocks=1 00:05:36.707 --rc geninfo_unexecuted_blocks=1 00:05:36.707 00:05:36.707 ' 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:36.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:36.707 --rc genhtml_branch_coverage=1 00:05:36.707 --rc genhtml_function_coverage=1 00:05:36.707 --rc genhtml_legend=1 00:05:36.707 --rc geninfo_all_blocks=1 00:05:36.707 --rc geninfo_unexecuted_blocks=1 00:05:36.707 00:05:36.707 ' 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.707 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.708 10:34:04 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:36.708 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:36.708 10:34:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:44.958 10:34:11 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:05:44.958 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:44.958 10:34:11 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:05:44.958 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:05:44.958 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:05:44.959 Found net devices under 0000:d9:00.0: mlx_0_0 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
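The scan above is plain sysfs walking: common.sh keeps a table of supported Intel and Mellanox device IDs, filters the host's PCI functions against it, and takes whatever netdev each matching function has registered. The core of it as a stand-alone sketch (vendor and device IDs taken from this trace; the real helper walks the whole e810/x722/mlx table):

    # List net interfaces backed by Mellanox 0x15b3:0x1015 functions,
    # the two ports this rig reported at 0000:d9:00.0 and 0000:d9:00.1.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x15b3 && $(<"$pci/device") == 0x1015 ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done

On this machine that yields mlx_0_0 and mlx_0_1, which is why the harness switches NVME_CONNECT to 'nvme connect -i 15' for RDMA and goes on to assign 192.168.100.8 and .9 to those ports.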
00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:05:44.959 Found net devices under 0000:d9:00.1: mlx_0_1 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:44.959 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:44.959 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:05:44.959 altname enp217s0f0np0 00:05:44.959 altname ens818f0np0 00:05:44.959 inet 192.168.100.8/24 scope global mlx_0_0 00:05:44.959 valid_lft forever preferred_lft forever 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:44.959 10:34:11 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:44.959 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:44.959 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:05:44.959 altname enp217s0f1np1 00:05:44.959 altname ens818f1np1 00:05:44.959 inet 192.168.100.9/24 scope global mlx_0_1 00:05:44.959 valid_lft forever preferred_lft forever 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:44.959 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_0 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:44.960 192.168.100.9' 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:44.960 192.168.100.9' 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:44.960 192.168.100.9' 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3623004 00:05:44.960 10:34:11 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3623004 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 3623004 ']' 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:44.960 [2024-11-07 10:34:11.428728] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:05:44.960 [2024-11-07 10:34:11.428782] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:44.960 [2024-11-07 10:34:11.505585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.960 [2024-11-07 10:34:11.544876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:44.960 [2024-11-07 10:34:11.544912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:44.960 [2024-11-07 10:34:11.544921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:44.960 [2024-11-07 10:34:11.544930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:44.960 [2024-11-07 10:34:11.544936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
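Everything from here on runs against this freshly started target (nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 3623004, reached over /var/tmp/spdk.sock). The stress phase the next stretch of trace walks through is one loop: spdk_nvme_perf holds a 30-second randread load against cnode1 while the harness detaches and re-attaches namespace 1 and keeps growing a null bdev. Condensed into a sketch with this run's names (a reconstruction from the trace below, not the script verbatim):

    # Condensed from ns_hotplug_stress.sh as traced below.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    rpc=$SPDK/scripts/rpc.py

    $SPDK/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do   # for as long as perf is running
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        $rpc bdev_null_resize NULL1 $((++size)) # 1001, 1002, ... under live I/O
    done
    wait "$PERF_PID"

The repeating 'Read completed with error (sct=0, sc=11)' lines below are the point of the test rather than a failure: sc=11 lines up with NVMe status 0x0b (Invalid Namespace or Format), which is what in-flight reads can be expected to complete with while namespace 1 is detached, and the -Q 1000 flag evidently lets perf keep running through those errors, printing only the 'Message suppressed 999 times' summaries.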
00:05:44.960 [2024-11-07 10:34:11.546485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.960 [2024-11-07 10:34:11.546567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.960 [2024-11-07 10:34:11.546570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:44.960 10:34:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:05:44.960 [2024-11-07 10:34:11.876038] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc68570/0xc6ca60) succeed. 00:05:44.960 [2024-11-07 10:34:11.885290] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc69b60/0xcae100) succeed. 00:05:44.960 10:34:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:44.960 10:34:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:44.960 [2024-11-07 10:34:12.384409] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:44.960 10:34:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:05:44.960 10:34:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:45.219 Malloc0 00:05:45.219 10:34:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:45.478 Delay0 00:05:45.478 10:34:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.737 10:34:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:05:45.737 NULL1 00:05:45.737 10:34:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:45.996 10:34:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:45.996 10:34:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3623308 00:05:45.996 10:34:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:05:45.996 10:34:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.375 Read completed with error (sct=0, sc=11) 00:05:47.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.375 10:34:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.375 10:34:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:47.375 10:34:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:47.662 true 00:05:47.662 10:34:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:05:47.662 10:34:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.599 10:34:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.600 10:34:16 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:48.600 10:34:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:48.859 true 00:05:48.859 10:34:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:05:48.859 10:34:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.796 10:34:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.796 10:34:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:49.796 10:34:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:50.055 true 00:05:50.055 10:34:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:05:50.055 10:34:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.992 10:34:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.992 10:34:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:50.992 10:34:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1004 00:05:51.252 true 00:05:51.252 10:34:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:05:51.252 10:34:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.188 10:34:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.188 10:34:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:52.188 10:34:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:52.445 true 00:05:52.445 10:34:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:05:52.445 10:34:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.381 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.381 10:34:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.381 10:34:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:53.381 10:34:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:53.639 true 00:05:53.639 10:34:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:05:53.639 10:34:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.898 10:34:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.898 10:34:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:53.898 10:34:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1007 00:05:54.157 true 00:05:54.157 10:34:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:05:54.157 10:34:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.534 10:34:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.534 10:34:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:55.534 10:34:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:55.793 true 00:05:55.793 10:34:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:05:55.793 10:34:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.728 10:34:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.728 10:34:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:56.728 10:34:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:56.986 true 00:05:56.986 10:34:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:05:56.986 10:34:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.923 10:34:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.923 10:34:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:57.923 10:34:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:58.182 true 00:05:58.182 10:34:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:05:58.182 10:34:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.118 10:34:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.118 10:34:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:59.118 10:34:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:59.377 true 00:05:59.377 10:34:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:05:59.377 10:34:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.313 10:34:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.313 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.313 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.313 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.313 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.313 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.313 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.313 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.313 10:34:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:00.313 10:34:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:00.571 true 00:06:00.571 10:34:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:06:00.571 10:34:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.507 10:34:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.507 10:34:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:01.507 10:34:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:01.765 true 00:06:01.765 10:34:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:06:01.766 10:34:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.024 10:34:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.024 10:34:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:02.024 10:34:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:02.282 true 00:06:02.282 10:34:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:06:02.282 10:34:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.659 10:34:30 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.659 10:34:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:03.659 10:34:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:03.659 true 00:06:03.659 10:34:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:06:03.659 10:34:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.595 10:34:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.854 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.854 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.854 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.854 10:34:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:04.854 10:34:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:05.112 true 00:06:05.112 10:34:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:06:05.112 10:34:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.953 10:34:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.953 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:06:05.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.953 10:34:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:05.953 10:34:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:06.265 true 00:06:06.265 10:34:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:06:06.265 10:34:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.206 10:34:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.206 10:34:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:07.206 10:34:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:07.465 true 00:06:07.465 10:34:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:06:07.465 10:34:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.401 10:34:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
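The cycle repeating in the trace above comes from lines 44-50 of ns_hotplug_stress.sh, visible in the @44-@50 prompts: as long as the I/O workload process (PID 3623308 here) is alive, the test hot-removes namespace 1, re-adds the Delay0 bdev as that namespace, and bumps the NULL1 bdev's size by one each pass (1004, 1005, ...). A minimal sketch of that loop, reconstructed from the trace; the perf_pid variable and the shortened rpc_py alias are stand-ins, not the script's literal text:

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
perf_pid=3623308                    # the I/O workload PID seen in this log
null_size=1000
while kill -0 "$perf_pid"; do                                        # line 44: loop while the workload lives
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # line 45: hot-remove ns 1 under I/O
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # line 46: hot-add Delay0 back
    null_size=$((null_size + 1))                                     # line 49: the null_size=NNNN entries
    $rpc_py bdev_null_resize NULL1 $null_size                        # line 50: grow the other namespace's bdev
done

The lone "true" after each bdev_null_resize appears to be the RPC's JSON result echoed by rpc.py.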
00:06:08.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.401 10:34:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:08.401 10:34:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:08.659 true 00:06:08.659 10:34:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:06:08.659 10:34:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.595 10:34:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.595 10:34:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:09.595 10:34:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:09.853 true 00:06:09.853 10:34:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:06:09.853 10:34:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.112 10:34:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.112 10:34:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:10.112 10:34:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:10.371 true 00:06:10.371 10:34:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:06:10.371 10:34:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.747 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.747 10:34:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.747 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.747 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.747 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.747 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.747 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.747 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.747 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.747 10:34:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:11.747 10:34:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:11.747 true 00:06:11.747 10:34:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:06:11.747 10:34:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.683 10:34:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.942 10:34:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:12.942 10:34:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:12.942 true 00:06:13.201 10:34:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:06:13.201 10:34:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.028 10:34:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.028 10:34:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 
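The interleaved "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" entries are the point of the stress test rather than a failure: while namespace 1 is detached, in-flight reads complete with an NVMe error, and the I/O tool rate-limits the identical messages instead of printing every one. (sct=0 is the generic command status type; sc=11 is consistent with the "Invalid Namespace or Format" status a read gets while its namespace is gone, assuming the values are printed in decimal.) When triaging a saved copy of a console log like this one, the bursts are easy to tally; console.log is a hypothetical file name for this output:

grep -c 'Message suppressed 999 times: Read completed with error' console.log   # number of suppressed-error bursts
grep -o 'sc=[0-9]*' console.log | sort | uniq -c                                # distinct NVMe status codes seen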
00:06:14.028 10:34:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:14.288 true 00:06:14.288 10:34:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:06:14.288 10:34:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.225 10:34:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.225 10:34:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:15.225 10:34:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:15.484 true 00:06:15.484 10:34:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:06:15.484 10:34:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.421 10:34:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.421 10:34:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:16.421 10:34:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:16.680 true 00:06:16.680 10:34:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308 00:06:16.680 10:34:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.939 10:34:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.198 10:34:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:17.198 10:34:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:06:17.198 true
00:06:17.198 10:34:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308
00:06:17.199 10:34:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:17.457 10:34:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:17.716 10:34:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:06:17.716 10:34:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:06:17.975 true
00:06:17.975 10:34:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308
00:06:17.975 10:34:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:17.975 10:34:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:18.234 10:34:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:06:18.234 10:34:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:06:18.234 Initializing NVMe Controllers
00:06:18.234 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:06:18.234 Controller IO queue size 128, less than required.
00:06:18.234 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:18.234 Controller IO queue size 128, less than required.
00:06:18.234 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:18.234 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:18.234 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:18.234 Initialization complete. Launching workers.
00:06:18.234 ========================================================
00:06:18.234                                                                              Latency(us)
00:06:18.234 Device Information                                                          :       IOPS      MiB/s    Average        min        max
00:06:18.234 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    5409.16       2.64   21385.42     966.35 1007067.17
00:06:18.234 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   35882.95      17.52    3566.93    1714.99  285896.73
00:06:18.234 ========================================================
00:06:18.234 Total                                                                       :   41292.11      20.16    5901.11     966.35 1007067.17
00:06:18.234
00:06:18.492 true
00:06:18.492 10:34:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3623308
00:06:18.492 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3623308) - No such process
00:06:18.492 10:34:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3623308
00:06:18.492 10:34:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:18.751 10:34:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:19.010 10:34:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:19.010 10:34:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:19.010 10:34:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:19.010 10:34:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:19.010 10:34:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:19.010 null0
00:06:19.010 10:34:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:19.010 10:34:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:19.010 10:34:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:19.269 null1
00:06:19.269 10:34:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:19.269 10:34:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:19.269 10:34:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:19.528 null2
00:06:19.528 10:34:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:19.528 10:34:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:19.528 10:34:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
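Once the workload exits, kill -0 fails ("No such process"), the script waits on the dead PID, removes both namespaces, and starts creating the eight null bdevs for the multi-threaded phase. The Total row of the statistics above is the IOPS-weighted combination of the two namespaces: the hot-plugged NSID 1 completed far fewer I/Os at much higher average latency than the stable NSID 2. A quick check of that arithmetic with the values copied from the table:

awk 'BEGIN {
    iops1 = 5409.16;  avg1 = 21385.42    # NSID 1 (the hot-removed/re-added namespace)
    iops2 = 35882.95; avg2 = 3566.93     # NSID 2
    total = iops1 + iops2                # 41292.11, the Total IOPS
    printf "%.2f IOPS, %.1f us average\n", total, (iops1 * avg1 + iops2 * avg2) / total
}'
# prints 41292.11 IOPS and ~5901 us, matching the Total row up to input rounding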
00:06:19.787 null3 00:06:19.787 10:34:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:19.787 10:34:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:19.787 10:34:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:19.787 null4 00:06:19.787 10:34:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:19.787 10:34:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:19.787 10:34:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:20.048 null5 00:06:20.049 10:34:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:20.049 10:34:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:20.049 10:34:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:20.308 null6 00:06:20.308 10:34:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:20.308 10:34:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:20.308 10:34:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:20.567 null7 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
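The setup loop traced at @58-@60 above creates one null bdev per worker before any namespaces are attached. Condensed from the trace (rpc_py is shorthand for the full scripts/rpc.py path; the two numeric arguments are the bdev size in MB and the block size in bytes, per SPDK's bdev_null_create RPC):

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nthreads=8
pids=()
for (( i = 0; i < nthreads; i++ )); do
    $rpc_py bdev_null_create "null$i" 100 4096   # null0 .. null7, each 100 MB with 4096-byte blocks
done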
00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
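Each backgrounded worker runs the add_remove helper traced at @14-@18: ten rounds of attaching its null bdev under a fixed namespace ID and detaching it again, so eight namespace IDs churn concurrently against cnode1. A sketch inferred from the @14-@18 prompts, not copied from the script (rpc_py as defined in the earlier sketches):

add_remove() {
    local nsid=$1 bdev=$2
    for (( i = 0; i < 10; i++ )); do
        # attach the bdev as namespace $nsid, then immediately pull it out again
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}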
00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3629309 3629311 3629315 3629317 3629320 3629324 3629327 3629330 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.568 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
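The launcher itself is visible at @62-@66: the eight add_remove workers go into the background, their PIDs are collected, and a single wait reaps them, which is why eight PIDs (3629309 ... 3629330) appear on the wait line above. A sketch of that pattern, reconstructed from the trace:

for (( i = 0; i < nthreads; i++ )); do
    add_remove $((i + 1)) "null$i" &   # namespace IDs 1-8 mapped onto null0-null7
    pids+=($!)
done
wait "${pids[@]}"                      # expands to the eight worker PIDs traced at @66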
00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.829 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:21.089 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:21.089 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.089 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:21.089 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:21.089 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:21.089 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:21.089 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:21.089 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.347 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.348 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:21.348 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.348 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.348 10:34:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:21.607 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:21.607 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:21.607 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:21.607 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:21.607 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:21.607 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:21.607 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.607 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.607 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.607 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.607 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:21.607 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.607 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.607 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.607 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.607 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.607 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.867 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.131 10:34:49 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:06:22.131 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.390 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.390 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.390 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:22.390 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.390 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:22.390 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.390 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.390 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.390 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.390 10:34:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.649 10:34:50 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.649 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.908 10:34:50 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:22.908 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:23.167 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:23.167 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.167 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:23.167 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:23.167 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.167 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.167 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:23.167 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:23.167 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:23.167 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.426 10:34:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.685 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.686 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:23.686 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.686 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.686 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:23.686 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.686 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.686 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:23.944 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.944 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.945 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:23.945 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.945 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.945 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:23.945 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:23.945 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.945 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:23.945 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.945 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.945 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:23.945 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:23.945 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:23.945 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:23.945 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:24.203 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.203 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.203 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:24.203 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.203 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.204 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:24.204 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.204 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.204 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:24.204 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.204 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.204 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:24.204 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.204 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.204 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:24.204 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.204 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.204 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:24.204 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:24.204 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.204 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.204 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:24.462 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:24.462 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:24.462 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:24.462 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:24.462 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.462 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.462 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:24.462 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.462 10:34:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:24.462 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.462 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.462 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.462 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.462 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.462 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.720 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.720 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.720 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.720 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.720 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.720 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.720 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:24.720 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.720 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:24.720 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:24.720 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:24.720 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:24.720 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:06:24.720 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:06:24.720 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:24.720 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:24.720 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:06:24.720 rmmod nvme_rdma 00:06:24.720 rmmod nvme_fabrics 00:06:24.720 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:24.721 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:24.721 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:24.721 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3623004 ']' 00:06:24.721 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3623004 00:06:24.721 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3623004 ']' 00:06:24.721 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3623004 00:06:24.721 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:06:24.721 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:24.721 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3623004 00:06:24.721 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:24.721 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:24.721 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3623004' 00:06:24.721 killing process with pid 3623004 00:06:24.721 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3623004 00:06:24.721 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3623004 00:06:24.979 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:24.979 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:06:24.979 00:06:24.979 real 0m48.453s 00:06:24.979 user 3m19.751s 00:06:24.979 sys 0m14.285s 00:06:24.979 10:34:52 
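[Annotation] The add/remove churn traced above is driven by a tight loop at lines 16-18 of target/ns_hotplug_stress.sh, visible in the xtrace as sh@16..sh@18. A minimal bash sketch of that pattern follows; the rpc.py path, subsystem NQN, and nullN bdev names are copied from the trace, while the one-background-loop-per-namespace dispatch is an assumption inferred from the interleaved ordering of the calls, and the I/O workload the real stress test runs alongside is omitted.

    # Sketch of the churn loop echoed above (sh@16..sh@18 in the xtrace).
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                                # sh@16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev" # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"         # sh@18
        done
    }

    # One background loop per namespace (assumed) reproduces the interleaving:
    # nsid N is always paired with bdev null$((N - 1)) in the trace.
    for n in $(seq 1 8); do
        add_remove "$n" "null$((n - 1))" &
    done
    wait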
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:24.979 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:24.979 ************************************ 00:06:24.979 END TEST nvmf_ns_hotplug_stress 00:06:24.979 ************************************ 00:06:24.979 10:34:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:06:24.979 10:34:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:24.979 10:34:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:24.979 10:34:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:24.979 ************************************ 00:06:24.979 START TEST nvmf_delete_subsystem 00:06:24.980 ************************************ 00:06:24.980 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:06:24.980 * Looking for test storage... 00:06:25.238 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:25.238 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:25.238 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:25.238 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:25.238 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:25.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.239 --rc genhtml_branch_coverage=1 00:06:25.239 --rc genhtml_function_coverage=1 00:06:25.239 --rc genhtml_legend=1 00:06:25.239 --rc geninfo_all_blocks=1 00:06:25.239 --rc geninfo_unexecuted_blocks=1 00:06:25.239 00:06:25.239 ' 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:25.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.239 --rc genhtml_branch_coverage=1 00:06:25.239 --rc genhtml_function_coverage=1 00:06:25.239 --rc genhtml_legend=1 00:06:25.239 --rc geninfo_all_blocks=1 00:06:25.239 --rc geninfo_unexecuted_blocks=1 00:06:25.239 00:06:25.239 ' 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:25.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.239 --rc genhtml_branch_coverage=1 00:06:25.239 --rc genhtml_function_coverage=1 00:06:25.239 --rc genhtml_legend=1 00:06:25.239 --rc geninfo_all_blocks=1 00:06:25.239 --rc geninfo_unexecuted_blocks=1 00:06:25.239 00:06:25.239 ' 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:25.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.239 --rc genhtml_branch_coverage=1 00:06:25.239 --rc genhtml_function_coverage=1 00:06:25.239 --rc genhtml_legend=1 00:06:25.239 --rc geninfo_all_blocks=1 00:06:25.239 --rc geninfo_unexecuted_blocks=1 00:06:25.239 00:06:25.239 ' 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
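[Annotation] The cmp_versions trace above (scripts/common.sh@333..@368, entered via lt 1.15 2) amounts to a field-wise comparison of two dotted version strings split on ".", "-" or ":". A condensed sketch under those observations; the helper names follow the trace, but the fallback of non-numeric fields to 0 and the simplified operator handling are assumptions.

    # Field-wise version compare, modeled on the traced scripts/common.sh logic.
    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] || d=0   # non-numeric/missing fields compare as 0 (assumed)
        echo "$d"
    }

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"   # IFS=.-: as shown at sh@336/@337
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ver1[v]=$(decimal "${ver1[v]}")
            ver2[v]=$(decimal "${ver2[v]}")
            ((ver1[v] > ver2[v])) && { [[ $op == ">" ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $op == "<" ]]; return; }
        done
        [[ $op == "=" || $op == "<=" || $op == ">=" ]]   # all fields equal
    }

    lt() { cmp_versions "$1" "<" "$2"; }   # lt 1.15 2 succeeds, as in the trace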
target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:25.239 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:25.240 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:25.240 10:34:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:33.362 10:34:59 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:33.362 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:33.362 
10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:33.362 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:33.362 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.362 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:06:33.363 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:33.363 10:34:59 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # 
continue 2 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:33.363 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:33.363 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:33.363 altname enp217s0f0np0 00:06:33.363 altname ens818f0np0 00:06:33.363 inet 192.168.100.8/24 scope global mlx_0_0 00:06:33.363 valid_lft forever preferred_lft forever 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:33.363 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:33.363 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:33.363 altname enp217s0f1np1 00:06:33.363 
altname ens818f1np1 00:06:33.363 inet 192.168.100.9/24 scope global mlx_0_1 00:06:33.363 valid_lft forever preferred_lft forever 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:33.363 10:34:59 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:33.363 192.168.100.9' 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:33.363 192.168.100.9' 00:06:33.363 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:33.364 192.168.100.9' 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3633498 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3633498 00:06:33.364 10:34:59 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3633498 ']' 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:33.364 10:34:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.364 [2024-11-07 10:34:59.827322] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:06:33.364 [2024-11-07 10:34:59.827372] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.364 [2024-11-07 10:34:59.904597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.364 [2024-11-07 10:34:59.942592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:33.364 [2024-11-07 10:34:59.942632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:33.364 [2024-11-07 10:34:59.942642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:33.364 [2024-11-07 10:34:59.942650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:33.364 [2024-11-07 10:34:59.942656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
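The waitforlisten call traced above is what gates the rest of the test on the target actually coming up. As a minimal sketch, assuming the values visible in this run (pid 3633498, RPC socket /var/tmp/spdk.sock, max_retries=100) — the real helper lives in test/common/autotest_common.sh and does more bookkeeping — the wait amounts to:

    pid=3633498                  # nvmfpid captured from nvmfappstart
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        # Fail fast if nvmf_tgt died during startup.
        kill -0 "$pid" 2> /dev/null || exit 1
        # Any answered RPC on the UNIX socket means the app is ready.
        if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5                # assumed poll interval, not taken from this log
    done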
00:06:33.364 [2024-11-07 10:34:59.943856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.364 [2024-11-07 10:34:59.943859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.364 [2024-11-07 10:35:00.108694] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x173b730/0x173fc20) succeed. 00:06:33.364 [2024-11-07 10:35:00.117542] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x173cc80/0x17812c0) succeed. 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.364 [2024-11-07 10:35:00.206607] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.364 NULL1 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.364 Delay0 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3633726 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:33.364 10:35:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:33.364 [2024-11-07 10:35:00.313922] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
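Collected from the rpc_cmd calls traced above, the target-side setup is equivalent to this RPC sequence (paths relative to the spdk checkout; the four 1000000 arguments to bdev_delay_create are the average and p99 read/write latencies in microseconds, so every I/O to Delay0 takes on the order of a second):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512 B blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

With spdk_nvme_perf then driving queue depth 128 of 512-byte random I/O at this artificially slow namespace, the target is guaranteed to hold a large number of commands in flight at the moment the subsystem is deleted, which is exactly the condition this test wants to exercise.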
00:06:34.740 10:35:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:34.740 10:35:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.740 10:35:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:36.122 NVMe io qpair process completion error 00:06:36.122 NVMe io qpair process completion error 00:06:36.122 NVMe io qpair process completion error 00:06:36.122 NVMe io qpair process completion error 00:06:36.122 NVMe io qpair process completion error 00:06:36.122 NVMe io qpair process completion error 00:06:36.122 10:35:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.122 10:35:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:36.122 10:35:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3633726 00:06:36.122 10:35:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:36.380 10:35:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:36.380 10:35:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3633726 00:06:36.380 10:35:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:36.948 Read completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Write completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Read completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Read completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Read completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Write completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Write completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Read completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Write completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Read completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Write completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Write completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Read completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Write completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Read completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Read completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Read completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Read completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Write completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Write completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Read completed with error (sct=0, sc=8) 00:06:36.948 starting I/O failed: -6 00:06:36.948 Read completed with error (sct=0, sc=8) 00:06:36.948 starting I/O 
failed: -6
00:06:36.948 [several hundred further 'Read/Write completed with error (sct=0, sc=8)' completions, interleaved with 'starting I/O failed: -6' entries from the two failing qpairs, elided]
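Decoding this flood for the record: each 'completed with error (sct=0, sc=8)' line is spdk_nvme_perf reporting a command whose completion carried status code type 0 (generic) and status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion — precisely what deleting the subsystem under active I/O provokes. The interleaved 'starting I/O failed: -6' entries are new submissions being refused with -ENXIO ('No such device or address', as the qpair error below spells out) once the queue pair has been torn down.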
00:06:36.950 Write completed with error (sct=0, sc=8) 00:06:36.950 Read completed with error (sct=0, sc=8) 00:06:36.950 Read completed with error (sct=0, sc=8) 00:06:36.950 Read completed with error (sct=0, sc=8) 00:06:36.950 Read completed with error (sct=0, sc=8) 00:06:36.950 Read completed with error (sct=0, sc=8) 00:06:36.950 Read completed with error (sct=0, sc=8) 00:06:36.950 Read completed with error (sct=0, sc=8) 00:06:36.950 Read completed with error (sct=0, sc=8) 00:06:36.950 Read completed with error (sct=0, sc=8) 00:06:36.950 Read completed with error (sct=0, sc=8) 00:06:36.950 Read completed with error (sct=0, sc=8) 00:06:36.950 Read completed with error (sct=0, sc=8) 00:06:36.950 Read completed with error (sct=0, sc=8) 00:06:36.950 Read completed with error (sct=0, sc=8) 00:06:36.950 Read completed with error (sct=0, sc=8) 00:06:36.950 Read completed with error (sct=0, sc=8) 00:06:36.950 Write completed with error (sct=0, sc=8) 00:06:36.950 Read completed with error (sct=0, sc=8) 00:06:36.950 Initializing NVMe Controllers 00:06:36.950 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:06:36.950 Controller IO queue size 128, less than required. 00:06:36.950 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:36.950 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:36.950 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:36.950 Initialization complete. Launching workers. 00:06:36.950 ======================================================== 00:06:36.950 Latency(us) 00:06:36.950 Device Information : IOPS MiB/s Average min max 00:06:36.950 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.48 0.04 1593640.00 1000207.32 2975366.02 00:06:36.950 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.48 0.04 1595117.42 1001339.93 2976568.65 00:06:36.950 ======================================================== 00:06:36.950 Total : 160.95 0.08 1594378.71 1000207.32 2976568.65 00:06:36.950 00:06:36.950 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:36.950 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3633726 00:06:36.950 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:36.950 [2024-11-07 10:35:04.413395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:06:36.950 [2024-11-07 10:35:04.413440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
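The delay=0 / kill -0 / sleep 0.5 lines woven through the perf output are delete_subsystem.sh verifying that perf actually exits once its subsystem disappears. Paraphrased from the trace (not copied from the script), the loop is roughly:

    perf_pid=3633726
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem mid-run
    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do   # kill -0 only probes that the process exists
        if (( delay++ > 30 )); then              # ~15 s of 0.5 s polls before giving up
            echo "perf survived subsystem deletion" >&2
            exit 1
        fi
        sleep 0.5
    done

The 'kill: (3633726) - No such process' message just below is that probe failing, i.e. the success path: perf saw its qpairs die and exited on its own.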
00:06:36.950 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3633726 00:06:37.517 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3633726) - No such process 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3633726 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3633726 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3633726 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.517 [2024-11-07 10:35:04.930438] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3634509 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3634509 00:06:37.517 10:35:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:37.517 [2024-11-07 10:35:05.026680] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:38.085 10:35:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:38.085 10:35:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3634509 00:06:38.085 10:35:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:38.343 10:35:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:38.343 10:35:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3634509 00:06:38.343 10:35:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:38.910 10:35:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:38.910 10:35:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3634509 00:06:38.910 10:35:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:39.477 10:35:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:39.477 10:35:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3634509 00:06:39.477 10:35:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:40.044 10:35:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:40.044 10:35:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3634509 00:06:40.044 10:35:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:40.612 10:35:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:40.612 10:35:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3634509 00:06:40.612 10:35:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:40.871 10:35:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:40.871 10:35:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3634509 00:06:40.871 10:35:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:41.438 10:35:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:41.438 10:35:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3634509 00:06:41.438 10:35:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:42.005 10:35:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:42.005 10:35:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3634509 00:06:42.005 10:35:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:42.570 10:35:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:42.570 10:35:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3634509 00:06:42.570 10:35:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:43.138 10:35:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:43.138 10:35:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3634509 00:06:43.138 10:35:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:43.402 10:35:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:43.402 10:35:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3634509 00:06:43.402 10:35:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:44.087 10:35:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:44.087 10:35:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3634509 00:06:44.087 10:35:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:44.355 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:44.355 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3634509 00:06:44.355 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:44.614 Initializing NVMe Controllers 00:06:44.614 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:06:44.614 Controller IO queue size 128, less than required. 00:06:44.614 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
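[Editor's note] The block of near-identical lines above is a bounded liveness poll: delete_subsystem.sh gives spdk_nvme_perf a limited budget to exit on its own before the run would be declared hung. A minimal sketch of that pattern, assuming $perf_pid holds the PID captured at launch (3634509 in this trace); the real script's exact control flow may differ slightly:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # signal 0 = existence check only, nothing is sent
        (( delay++ > 20 )) && exit 1            # ~21 polls x 0.5 s timeout budget
        sleep 0.5
    done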
00:06:44.614 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:44.614 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:44.614 Initialization complete. Launching workers. 00:06:44.614 ======================================================== 00:06:44.614 Latency(us) 00:06:44.614 Device Information : IOPS MiB/s Average min max 00:06:44.614 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001311.58 1000059.70 1004000.83 00:06:44.614 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002478.33 1000106.66 1006547.57 00:06:44.614 ======================================================== 00:06:44.614 Total : 256.00 0.12 1001894.95 1000059.70 1006547.57 00:06:44.614 00:06:44.873 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:44.873 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3634509 00:06:44.873 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3634509) - No such process 00:06:44.873 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3634509 00:06:44.873 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:44.873 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:44.873 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:44.873 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:44.873 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:06:44.873 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:06:44.873 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:44.873 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:44.873 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:06:44.873 rmmod nvme_rdma 00:06:45.132 rmmod nvme_fabrics 00:06:45.132 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:45.132 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:45.132 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:45.132 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3633498 ']' 00:06:45.132 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3633498 00:06:45.132 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3633498 ']' 00:06:45.132 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3633498 00:06:45.132 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:06:45.132 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
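[Editor's note] The MiB/s column in the latency summary above is just IOPS times the 512-byte IO size from the perf command line (-o 512). A quick check of the per-core rows:

    # 128 IOPS/core x 512 B per IO = 65536 B/s = 0.0625 MiB/s (the table rounds to 0.06);
    # the Total row doubles it: 256 * 512 / 1048576 = 0.125 -> 0.12 MiB/s.
    awk 'BEGIN { printf "%.4f MiB/s\n", 128 * 512 / 1048576 }'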
00:06:45.132 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3633498 00:06:45.132 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:45.132 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:45.132 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3633498' 00:06:45.132 killing process with pid 3633498 00:06:45.132 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3633498 00:06:45.132 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3633498 00:06:45.391 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:45.391 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:06:45.391 00:06:45.391 real 0m20.302s 00:06:45.391 user 0m49.018s 00:06:45.391 sys 0m6.567s 00:06:45.391 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.391 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.391 ************************************ 00:06:45.391 END TEST nvmf_delete_subsystem 00:06:45.391 ************************************ 00:06:45.391 10:35:12 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:06:45.391 10:35:12 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:45.391 10:35:12 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.391 10:35:12 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:45.391 ************************************ 00:06:45.391 START TEST nvmf_host_management 00:06:45.391 ************************************ 00:06:45.391 10:35:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:06:45.391 * Looking for test storage... 
00:06:45.391 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:45.391 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:45.391 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:45.391 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:45.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.652 --rc genhtml_branch_coverage=1 00:06:45.652 --rc genhtml_function_coverage=1 00:06:45.652 --rc genhtml_legend=1 00:06:45.652 --rc geninfo_all_blocks=1 00:06:45.652 --rc geninfo_unexecuted_blocks=1 00:06:45.652 00:06:45.652 ' 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:45.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.652 --rc genhtml_branch_coverage=1 00:06:45.652 --rc genhtml_function_coverage=1 00:06:45.652 --rc genhtml_legend=1 00:06:45.652 --rc geninfo_all_blocks=1 00:06:45.652 --rc geninfo_unexecuted_blocks=1 00:06:45.652 00:06:45.652 ' 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:45.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.652 --rc genhtml_branch_coverage=1 00:06:45.652 --rc genhtml_function_coverage=1 00:06:45.652 --rc genhtml_legend=1 00:06:45.652 --rc geninfo_all_blocks=1 00:06:45.652 --rc geninfo_unexecuted_blocks=1 00:06:45.652 00:06:45.652 ' 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:45.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.652 --rc genhtml_branch_coverage=1 00:06:45.652 --rc genhtml_function_coverage=1 00:06:45.652 --rc genhtml_legend=1 00:06:45.652 --rc geninfo_all_blocks=1 00:06:45.652 --rc geninfo_unexecuted_blocks=1 00:06:45.652 00:06:45.652 ' 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.652 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:45.653 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:45.653 10:35:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:52.224 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:52.224 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:52.224 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:52.224 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:52.224 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:52.224 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:52.225 10:35:19 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:52.225 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:52.225 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:52.225 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found 
net devices under 0000:d9:00.1: mlx_0_1' 00:06:52.225 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 
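[Editor's note] The "Found net devices under ..." lines above come from a plain sysfs glob. This is essentially the traced nvmf/common.sh@411-428 logic, shown standalone for the first PCI function found:

    pci=0000:d9:00.0                                      # first mlx5 port reported above
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )    # kernel netdevs under that function
    pci_net_devs=( "${pci_net_devs[@]##*/}" )             # strip the sysfs paths, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> mlx_0_0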
00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:52.225 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:52.226 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:52.226 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:52.226 altname enp217s0f0np0 00:06:52.226 altname ens818f0np0 00:06:52.226 inet 192.168.100.8/24 scope global mlx_0_0 00:06:52.226 valid_lft forever preferred_lft forever 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:52.226 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:52.226 link/ether ec:0d:9a:8b:2d:dd brd 
ff:ff:ff:ff:ff:ff 00:06:52.226 altname enp217s0f1np1 00:06:52.226 altname ens818f1np1 00:06:52.226 inet 192.168.100.9/24 scope global mlx_0_1 00:06:52.226 valid_lft forever preferred_lft forever 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:52.226 10:35:19 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:52.226 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:52.486 192.168.100.9' 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:52.486 192.168.100.9' 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:52.486 192.168.100.9' 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3638995 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3638995 00:06:52.486 
10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3638995 ']' 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.486 10:35:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:52.486 [2024-11-07 10:35:20.019353] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:06:52.486 [2024-11-07 10:35:20.019414] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.486 [2024-11-07 10:35:20.099025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.486 [2024-11-07 10:35:20.142094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:52.486 [2024-11-07 10:35:20.142133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:52.486 [2024-11-07 10:35:20.142143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:52.486 [2024-11-07 10:35:20.142151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:52.486 [2024-11-07 10:35:20.142158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
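[Editor's note] nvmfappstart, traced above (nvmf/common.sh@508-510), reduces to three steps: launch the target in the background, record its PID, and block until the RPC socket answers. A condensed sketch; waitforlisten is the autotest_common.sh helper, which defaults to /var/tmp/spdk.sock:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # returns once the app accepts RPC commands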
00:06:52.486 [2024-11-07 10:35:20.143801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.486 [2024-11-07 10:35:20.143821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.486 [2024-11-07 10:35:20.143903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:52.486 [2024-11-07 10:35:20.143905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.425 10:35:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:53.425 10:35:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:53.425 10:35:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:53.425 10:35:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:53.425 10:35:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:53.425 10:35:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:53.425 10:35:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:53.425 10:35:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.425 10:35:20 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:53.425 [2024-11-07 10:35:20.935394] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23d40f0/0x23d85e0) succeed. 00:06:53.425 [2024-11-07 10:35:20.945070] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23d5780/0x2419c80) succeed. 
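[Editor's note] With the transport created above, the rpcs.txt batch assembled in the next lines amounts to the standard target bring-up sequence. A condensed equivalent using rpc.py, with the bdev geometry taken from MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 earlier in the log; the serial number is assumed (the trace does not show it for cnode0):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create -b Malloc0 64 512                                       # 64 MiB, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001  # serial assumed
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420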
00:06:53.425 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.425 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:53.425 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:53.425 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:53.425 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:53.425 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:53.425 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:53.425 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.425 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:53.684 Malloc0 00:06:53.684 [2024-11-07 10:35:21.136684] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3639280 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3639280 /var/tmp/bdevperf.sock 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3639280 ']' 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:53.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:53.684 { 00:06:53.684 "params": { 00:06:53.684 "name": "Nvme$subsystem", 00:06:53.684 "trtype": "$TEST_TRANSPORT", 00:06:53.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:53.684 "adrfam": "ipv4", 00:06:53.684 "trsvcid": "$NVMF_PORT", 00:06:53.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:53.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:53.684 "hdgst": ${hdgst:-false}, 00:06:53.684 "ddgst": ${ddgst:-false} 00:06:53.684 }, 00:06:53.684 "method": "bdev_nvme_attach_controller" 00:06:53.684 } 00:06:53.684 EOF 00:06:53.684 )") 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:53.684 10:35:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:53.684 "params": { 00:06:53.684 "name": "Nvme0", 00:06:53.684 "trtype": "rdma", 00:06:53.684 "traddr": "192.168.100.8", 00:06:53.684 "adrfam": "ipv4", 00:06:53.684 "trsvcid": "4420", 00:06:53.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:53.684 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:53.684 "hdgst": false, 00:06:53.684 "ddgst": false 00:06:53.684 }, 00:06:53.684 "method": "bdev_nvme_attach_controller" 00:06:53.684 }' 00:06:53.684 [2024-11-07 10:35:21.243591] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:06:53.684 [2024-11-07 10:35:21.243641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639280 ] 00:06:53.684 [2024-11-07 10:35:21.321237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.944 [2024-11-07 10:35:21.361181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.944 Running I/O for 10 seconds... 
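[Editor's note] The bdevperf run announced above was started with its attach-controller config delivered inline: the /dev/fd/63 in the trace is bash process substitution, so the JSON printed above never touches disk. Condensed from the traced host_management.sh@72-74:

    bdevperf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
    "$bdevperf" -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!
    waitforlisten "$perfpid" /var/tmp/bdevperf.sock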
00:06:54.513 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:54.513 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:54.513 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:54.513 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.513 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:54.513 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.513 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:54.513 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:54.513 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:54.513 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:54.513 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:54.513 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:54.513 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:54.513 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1772 00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1772 -ge 100 ']' 00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
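[Editor's note] The waitforio check traced above (host_management.sh@52-64) polls bdevperf's private RPC socket for iostat until the bdev shows real traffic; here it passed on the first poll (1772 reads >= the 100 minimum). A sketch of the loop, with the inter-poll delay assumed since the trace never reaches a second iteration:

    for (( i = 10; i != 0; i-- )); do
        reads=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break
        sleep 1   # assumed pacing; not visible in the trace
    done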
00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.514 10:35:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:55.652 1897.00 IOPS, 118.56 MiB/s [2024-11-07T09:35:23.323Z] [2024-11-07 10:35:23.170646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:114688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bdff80 len:0x10000 key:0x182100 00:06:55.652 [2024-11-07 10:35:23.170681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:66f1f000 sqhd:7250 p:0 m:0 dnr:0 00:06:55.652 [2024-11-07 10:35:23.170699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bcff00 len:0x10000 key:0x182100 00:06:55.652 [2024-11-07 10:35:23.170709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:66f1f000 sqhd:7250 p:0 m:0 dnr:0 00:06:55.652 [2024-11-07 10:35:23.170720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bbfe80 len:0x10000 key:0x182100 00:06:55.652 [2024-11-07 10:35:23.170730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:66f1f000 sqhd:7250 p:0 m:0 dnr:0 00:06:55.652 [2024-11-07 10:35:23.170741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bafe00 len:0x10000 key:0x182100 00:06:55.652 [2024-11-07 10:35:23.170750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:66f1f000 sqhd:7250 p:0 m:0 dnr:0 00:06:55.652 [2024-11-07 10:35:23.170761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:115200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b9fd80 len:0x10000 key:0x182100 00:06:55.652 [2024-11-07 10:35:23.170770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:66f1f000 sqhd:7250 p:0 m:0 dnr:0 00:06:55.652 [2024-11-07 10:35:23.170781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b8fd00 len:0x10000 key:0x182100 00:06:55.652 [2024-11-07 10:35:23.170790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:66f1f000 sqhd:7250 p:0 m:0 dnr:0 00:06:55.652 [2024-11-07 10:35:23.170801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:115456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b7fc80 len:0x10000 key:0x182100 00:06:55.652 [2024-11-07 10:35:23.170809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:66f1f000 sqhd:7250 p:0 m:0 dnr:0 00:06:55.652
[~100 near-identical nvme_qpair notices elided: every remaining in-flight command, WRITE lba 115584-119808 (keys 0x182100/0x182000) and READ lba 111744-114176 (key 0x182b00), len:128 each, was printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:66f1f000 sqhd:7250 p:0 m:0 dnr:0 while the controller reset deleted the submission queue]
00:06:55.654 [2024-11-07 10:35:23.171892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1
lba:114304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be54000 len:0x10000 key:0x182b00 00:06:55.654 [2024-11-07 10:35:23.171901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:66f1f000 sqhd:7250 p:0 m:0 dnr:0 00:06:55.654 [2024-11-07 10:35:23.171911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:114432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be33000 len:0x10000 key:0x182b00 00:06:55.654 [2024-11-07 10:35:23.171920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:66f1f000 sqhd:7250 p:0 m:0 dnr:0 00:06:55.654 [2024-11-07 10:35:23.171930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be12000 len:0x10000 key:0x182b00 00:06:55.654 [2024-11-07 10:35:23.171939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:66f1f000 sqhd:7250 p:0 m:0 dnr:0 00:06:55.654 [2024-11-07 10:35:23.174632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:55.654 task offset: 114688 on job bdev=Nvme0n1 fails 00:06:55.654 00:06:55.654 Latency(us) 00:06:55.654 [2024-11-07T09:35:23.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:55.654 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:55.654 Job: Nvme0n1 ended in about 1.63 seconds with error 00:06:55.654 Verification LBA range: start 0x0 length 0x400 00:06:55.654 Nvme0n1 : 1.63 1163.60 72.73 39.26 0.00 52712.32 2031.62 1020054.73 00:06:55.654 [2024-11-07T09:35:23.325Z] =================================================================================================================== 00:06:55.654 [2024-11-07T09:35:23.325Z] Total : 1163.60 72.73 39.26 0.00 52712.32 2031.62 1020054.73 00:06:55.654 [2024-11-07 10:35:23.177107] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.654 10:35:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3639280 00:06:55.654 10:35:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:55.654 10:35:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:55.654 10:35:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:55.654 10:35:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:55.654 10:35:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:55.654 10:35:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:55.654 10:35:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:55.654 { 00:06:55.654 "params": { 00:06:55.654 "name": "Nvme$subsystem", 00:06:55.654 "trtype": "$TEST_TRANSPORT", 00:06:55.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:55.654 "adrfam": "ipv4", 00:06:55.654 "trsvcid": "$NVMF_PORT", 00:06:55.654 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:06:55.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:55.654 "hdgst": ${hdgst:-false}, 00:06:55.654 "ddgst": ${ddgst:-false} 00:06:55.654 }, 00:06:55.654 "method": "bdev_nvme_attach_controller" 00:06:55.654 } 00:06:55.654 EOF 00:06:55.654 )") 00:06:55.654 10:35:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:55.654 10:35:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:55.654 10:35:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:55.654 10:35:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:55.654 "params": { 00:06:55.654 "name": "Nvme0", 00:06:55.654 "trtype": "rdma", 00:06:55.654 "traddr": "192.168.100.8", 00:06:55.654 "adrfam": "ipv4", 00:06:55.654 "trsvcid": "4420", 00:06:55.654 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:55.654 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:55.654 "hdgst": false, 00:06:55.654 "ddgst": false 00:06:55.654 }, 00:06:55.654 "method": "bdev_nvme_attach_controller" 00:06:55.654 }' 00:06:55.654 [2024-11-07 10:35:23.230638] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:06:55.654 [2024-11-07 10:35:23.230690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639622 ] 00:06:55.654 [2024-11-07 10:35:23.307350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.914 [2024-11-07 10:35:23.347448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.914 Running I/O for 1 seconds... 
00:06:57.293 3093.00 IOPS, 193.31 MiB/s 00:06:57.293 Latency(us) 00:06:57.293 [2024-11-07T09:35:24.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.293 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:57.293 Verification LBA range: start 0x0 length 0x400 00:06:57.293 Nvme0n1 : 1.01 3130.41 195.65 0.00 0.00 20037.45 950.27 33973.86 00:06:57.293 [2024-11-07T09:35:24.964Z] =================================================================================================================== 00:06:57.293 [2024-11-07T09:35:24.964Z] Total : 3130.41 195.65 0.00 0.00 20037.45 950.27 33973.86 00:06:57.293 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 3639280 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:06:57.293 rmmod nvme_rdma 00:06:57.293 rmmod nvme_fabrics 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3638995 ']' 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3638995 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3638995 ']' 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3638995 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:57.293 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3638995 00:06:57.294 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:57.294 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:57.294 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3638995' 00:06:57.294 killing process with pid 3638995 00:06:57.294 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3638995 00:06:57.294 10:35:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3638995 00:06:57.553 [2024-11-07 10:35:25.082974] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:57.553 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:57.553 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:06:57.553 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:57.553 00:06:57.553 real 0m12.193s 00:06:57.553 user 0m25.190s 00:06:57.553 sys 0m6.329s 00:06:57.553 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.553 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.553 ************************************ 00:06:57.553 END TEST nvmf_host_management 00:06:57.553 ************************************ 00:06:57.553 10:35:25 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:06:57.553 10:35:25 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:57.553 10:35:25 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.553 10:35:25 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:57.553 ************************************ 00:06:57.553 START TEST nvmf_lvol 00:06:57.553 ************************************ 00:06:57.553 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:06:57.813 * Looking for test storage... 
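Before nvmf_lvol's setup continues below, note the teardown pattern that just closed nvmf_host_management: killprocess confirms the pid is alive with kill -0, reads the process name with ps so it never signals a sudo wrapper, then kills and waits so the reactor is reaped before nvme-rdma/nvme-fabrics are unloaded. A simplified sketch under those assumptions (the real helper in autotest_common.sh carries more fallbacks than the trace shows):

# Sketch of the killprocess flow traced above.
killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                   # nothing to kill
    kill -0 "$pid" 2>/dev/null || return 0      # already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    else
        process_name=$(ps -c -o command "$pid" | tail -1)  # BSD branch: assumption
    fi
    [ "$process_name" = sudo ] && return 1      # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                         # reap before module unload
}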
00:06:57.813 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:57.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.813 --rc genhtml_branch_coverage=1 00:06:57.813 --rc genhtml_function_coverage=1 00:06:57.813 --rc genhtml_legend=1 00:06:57.813 --rc geninfo_all_blocks=1 00:06:57.813 --rc geninfo_unexecuted_blocks=1 00:06:57.813 00:06:57.813 ' 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:57.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.813 --rc genhtml_branch_coverage=1 00:06:57.813 --rc genhtml_function_coverage=1 00:06:57.813 --rc genhtml_legend=1 00:06:57.813 --rc geninfo_all_blocks=1 00:06:57.813 --rc geninfo_unexecuted_blocks=1 00:06:57.813 00:06:57.813 ' 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:57.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.813 --rc genhtml_branch_coverage=1 00:06:57.813 --rc genhtml_function_coverage=1 00:06:57.813 --rc genhtml_legend=1 00:06:57.813 --rc geninfo_all_blocks=1 00:06:57.813 --rc geninfo_unexecuted_blocks=1 00:06:57.813 00:06:57.813 ' 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:57.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.813 --rc genhtml_branch_coverage=1 00:06:57.813 --rc genhtml_function_coverage=1 00:06:57.813 --rc genhtml_legend=1 00:06:57.813 --rc geninfo_all_blocks=1 00:06:57.813 --rc geninfo_unexecuted_blocks=1 00:06:57.813 00:06:57.813 ' 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.813 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:57.814 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:57.814 10:35:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.380 10:35:31 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:04.380 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:04.380 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:04.380 10:35:31 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:04.380 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:04.381 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:04.381 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 
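The scan above maps each supported Mellanox device id (0x1015 here) to its bound netdev by globbing /sys/bus/pci/devices/<bdf>/net, and rdma_device_init then loads the kernel IB/RDMA stack. A compressed sketch of both steps, assuming pci_devs was already filled by the id-matching logic in the trace; the module list matches the modprobe calls above:

# Sketch: resolve RDMA-capable PCI functions to net devices, then load the stack.
net_devs=()
for pci in "${pci_devs[@]}"; do                 # e.g. 0000:d9:00.0 0000:d9:00.1
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [ -e "${pci_net_devs[0]}" ] || continue     # no netdev bound to this function
    pci_net_devs=("${pci_net_devs[@]##*/}")     # strip path -> mlx_0_0, mlx_0_1
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"                             # rdma_device_init, as traced
done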
00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:04.381 
10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:04.381 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:04.381 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:04.381 altname enp217s0f0np0 00:07:04.381 altname ens818f0np0 00:07:04.381 inet 192.168.100.8/24 scope global mlx_0_0 00:07:04.381 valid_lft forever preferred_lft forever 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:04.381 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:04.381 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:04.381 altname enp217s0f1np1 00:07:04.381 altname ens818f1np1 00:07:04.381 inet 192.168.100.9/24 scope global mlx_0_1 00:07:04.381 valid_lft forever preferred_lft forever 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@109 -- # continue 2 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:04.381 192.168.100.9' 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:04.381 192.168.100.9' 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:07:04.381 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:04.382 192.168.100.9' 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:04.382 
10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3643221 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3643221 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3643221 ']' 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:04.382 10:35:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:04.382 [2024-11-07 10:35:31.866268] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:07:04.382 [2024-11-07 10:35:31.866329] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.382 [2024-11-07 10:35:31.942897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.382 [2024-11-07 10:35:31.982374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.382 [2024-11-07 10:35:31.982412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.382 [2024-11-07 10:35:31.982421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.382 [2024-11-07 10:35:31.982430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.382 [2024-11-07 10:35:31.982437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
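nvmfappstart above launches the target (build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7, pid 3643221) and then sits in waitforlisten until the app answers on /var/tmp/spdk.sock. A rough sketch of that start-and-wait pattern follows; the real helper in autotest_common.sh is more careful about retries and error reporting, so treat this as an illustration only:

    # Sketch only: poll the RPC socket until nvmf_tgt responds.
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk    # path from this run
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
        sleep 0.5
    done

Once the socket answers, the script stamps timing_exit and installs the nvmftestfini trap seen just after.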
00:07:04.382 [2024-11-07 10:35:31.983862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.382 [2024-11-07 10:35:31.983958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.382 [2024-11-07 10:35:31.983960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.641 10:35:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:04.641 10:35:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:07:04.641 10:35:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:04.641 10:35:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:04.641 10:35:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:04.641 10:35:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.641 10:35:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:04.641 [2024-11-07 10:35:32.309095] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22e1270/0x22e5760) succeed. 00:07:04.899 [2024-11-07 10:35:32.318084] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22e2860/0x2326e00) succeed. 00:07:04.899 10:35:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:05.157 10:35:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:05.157 10:35:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:05.415 10:35:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:05.415 10:35:32 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:05.415 10:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:05.673 10:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4a32d063-f7e0-4b20-ab4b-9027a5d1d167 00:07:05.673 10:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4a32d063-f7e0-4b20-ab4b-9027a5d1d167 lvol 20 00:07:05.932 10:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2dfa860d-bdd8-4df7-8b81-7e5d19a62448 00:07:05.932 10:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:06.190 10:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2dfa860d-bdd8-4df7-8b81-7e5d19a62448 00:07:06.190 10:35:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:06.447 [2024-11-07 10:35:34.011059] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:06.447 10:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:06.705 10:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3643631 00:07:06.705 10:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:06.705 10:35:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:07.640 10:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2dfa860d-bdd8-4df7-8b81-7e5d19a62448 MY_SNAPSHOT 00:07:07.899 10:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2e6f0fa4-fda7-46ea-a884-49f8f3c98dfa 00:07:07.899 10:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2dfa860d-bdd8-4df7-8b81-7e5d19a62448 30 00:07:08.157 10:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2e6f0fa4-fda7-46ea-a884-49f8f3c98dfa MY_CLONE 00:07:08.416 10:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=78351041-ad38-42d1-8fb7-862fc881c1d3 00:07:08.416 10:35:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 78351041-ad38-42d1-8fb7-862fc881c1d3 00:07:08.674 10:35:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3643631 00:07:18.644 Initializing NVMe Controllers 00:07:18.644 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:07:18.644 Controller IO queue size 128, less than required. 00:07:18.644 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:18.644 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:18.644 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:18.644 Initialization complete. Launching workers. 
00:07:18.644 ========================================================
00:07:18.644 Latency(us)
00:07:18.644 Device Information : IOPS MiB/s Average min max
00:07:18.644 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16670.30 65.12 7680.02 2376.85 46990.79
00:07:18.644 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16561.80 64.69 7729.58 3342.26 50770.38
00:07:18.644 ========================================================
00:07:18.644 Total : 33232.09 129.81 7704.72 2376.85 50770.38
00:07:18.644
00:07:18.644 10:35:45 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:18.644 10:35:45 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2dfa860d-bdd8-4df7-8b81-7e5d19a62448
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4a32d063-f7e0-4b20-ab4b-9027a5d1d167
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:07:18.644 rmmod nvme_rdma
00:07:18.644 rmmod nvme_fabrics
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3643221 ']'
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3643221
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3643221 ']'
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3643221
00:07:18.644 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname
00:07:18.903 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:18.903 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3643221
00:07:18.903 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:18.903 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol --
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:18.903 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3643221' 00:07:18.903 killing process with pid 3643221 00:07:18.903 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3643221 00:07:18.903 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3643221 00:07:19.162 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:19.162 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:19.162 00:07:19.162 real 0m21.478s 00:07:19.162 user 1m10.332s 00:07:19.162 sys 0m6.224s 00:07:19.162 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:19.162 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:19.162 ************************************ 00:07:19.162 END TEST nvmf_lvol 00:07:19.162 ************************************ 00:07:19.162 10:35:46 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:07:19.162 10:35:46 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:19.162 10:35:46 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:19.162 10:35:46 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:19.162 ************************************ 00:07:19.162 START TEST nvmf_lvs_grow 00:07:19.162 ************************************ 00:07:19.162 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:07:19.162 * Looking for test storage... 
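Before the lvs_grow prologue continues, a recap: stripped of xtrace noise, the nvmf_lvol test that just passed reduces to a short RPC sequence. A condensed, hypothetical replay is sketched below with rpc.py shortened to $rpc; note the real script captures every UUID from the previous call's stdout, nothing is hard-coded:

    # Condensed replay of the nvmf_lvol flow traced earlier.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                    # -> Malloc0
    $rpc bdev_malloc_create 64 512                    # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # lvstore UUID on stdout
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB volume
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30                  # grow the live volume past its snapshot
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                   # detach the clone from the snapshot
    # spdk_nvme_perf drives randwrite I/O at the listener concurrently
    # with the snapshot/clone steps.

Teardown reverses the stack, as the trace above shows: delete the subsystem, the lvol, then the lvstore, and finally unload nvme-rdma.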
00:07:19.162 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:19.162 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:19.162 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:19.162 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:19.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.422 --rc genhtml_branch_coverage=1 00:07:19.422 --rc genhtml_function_coverage=1 00:07:19.422 --rc genhtml_legend=1 00:07:19.422 --rc geninfo_all_blocks=1 00:07:19.422 --rc geninfo_unexecuted_blocks=1 00:07:19.422 00:07:19.422 ' 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:19.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.422 --rc genhtml_branch_coverage=1 00:07:19.422 --rc genhtml_function_coverage=1 00:07:19.422 --rc genhtml_legend=1 00:07:19.422 --rc geninfo_all_blocks=1 00:07:19.422 --rc geninfo_unexecuted_blocks=1 00:07:19.422 00:07:19.422 ' 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:19.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.422 --rc genhtml_branch_coverage=1 00:07:19.422 --rc genhtml_function_coverage=1 00:07:19.422 --rc genhtml_legend=1 00:07:19.422 --rc geninfo_all_blocks=1 00:07:19.422 --rc geninfo_unexecuted_blocks=1 00:07:19.422 00:07:19.422 ' 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:19.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.422 --rc genhtml_branch_coverage=1 00:07:19.422 --rc genhtml_function_coverage=1 00:07:19.422 --rc genhtml_legend=1 00:07:19.422 --rc geninfo_all_blocks=1 00:07:19.422 --rc geninfo_unexecuted_blocks=1 00:07:19.422 00:07:19.422 ' 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
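The cmp_versions walk above is scripts/common.sh asking whether the installed lcov (1.15) is older than 2, so that compatible coverage flag spellings get exported just after. The essence of that dotted-version comparison, restated as a self-contained sketch rather than the script's exact function:

    # Standalone sketch: succeed when $1 is an older version than $2.
    lt() {
        local IFS=.- i v1 v2                 # split fields on '.' and '-'
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                             # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the trace's outcome

Missing fields default to 0, so for example 1.15 also compares below 1.15.1.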
00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.422 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:19.423 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:19.423 10:35:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.992 10:35:53 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:25.992 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:25.993 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:25.993 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:25.993 10:35:53 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:25.993 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:25.993 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:25.993 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:25.993 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:25.993 altname enp217s0f0np0 00:07:25.993 altname ens818f0np0 00:07:25.993 inet 192.168.100.8/24 scope global mlx_0_0 00:07:25.993 valid_lft forever preferred_lft forever 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:25.993 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:25.993 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:25.993 altname enp217s0f1np1 00:07:25.993 altname ens818f1np1 00:07:25.993 inet 192.168.100.9/24 scope global mlx_0_1 00:07:25.993 valid_lft forever preferred_lft forever 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:25.993 10:35:53 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:25.993 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:25.994 192.168.100.9' 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:25.994 192.168.100.9' 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:25.994 192.168.100.9' 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3649015 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3649015 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3649015 ']' 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:25.994 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.994 [2024-11-07 10:35:53.622379] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:07:25.994 [2024-11-07 10:35:53.622431] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.253 [2024-11-07 10:35:53.696293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.253 [2024-11-07 10:35:53.734629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.253 [2024-11-07 10:35:53.734667] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.253 [2024-11-07 10:35:53.734677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.253 [2024-11-07 10:35:53.734685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.253 [2024-11-07 10:35:53.734692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
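The trace above reduces each RDMA NIC to its first IPv4 address, then splits the resulting two-line list into the first and second target IPs. A minimal bash sketch of that pipeline, assuming the mlx_0_0/mlx_0_1 interface names from this run; the helper name is illustrative, and only the ip/awk/cut and head/tail chains are taken from the trace (nvmf/common.sh@116-117 and @485-486):

get_ip_address() {
    local interface=$1
    # "6: mlx_0_0    inet 192.168.100.8/24 ..." -> field 4 is "192.168.100.8/24";
    # cut then strips the prefix length, leaving the bare address.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9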
00:07:26.253 [2024-11-07 10:35:53.735275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.253 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:26.253 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:26.253 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:26.253 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:26.253 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:26.253 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.253 10:35:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:26.511 [2024-11-07 10:35:54.067679] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2456b80/0x245b070) succeed. 00:07:26.511 [2024-11-07 10:35:54.076806] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2458030/0x249c710) succeed. 00:07:26.511 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:26.511 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:26.511 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.511 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:26.511 ************************************ 00:07:26.511 START TEST lvs_grow_clean 00:07:26.511 ************************************ 00:07:26.511 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:26.511 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:26.511 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:26.511 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:26.511 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:26.511 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:26.512 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:26.512 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:26.512 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:26.512 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:26.770 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:26.770 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:27.028 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7686ff48-d055-41af-9472-0c68651e1312 00:07:27.028 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7686ff48-d055-41af-9472-0c68651e1312 00:07:27.028 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:27.286 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:27.286 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:27.286 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7686ff48-d055-41af-9472-0c68651e1312 lvol 150 00:07:27.286 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0754ce72-2894-4549-a3e3-eaf67f24abce 00:07:27.286 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:27.286 10:35:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:27.543 [2024-11-07 10:35:55.058917] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:27.543 [2024-11-07 10:35:55.058972] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:27.543 true 00:07:27.544 10:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7686ff48-d055-41af-9472-0c68651e1312 00:07:27.544 10:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:27.801 10:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:27.801 10:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:27.801 10:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0754ce72-2894-4549-a3e3-eaf67f24abce 00:07:28.059 10:35:55 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:28.317 [2024-11-07 10:35:55.753189] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:28.318 10:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:28.318 10:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3649369 00:07:28.318 10:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:28.318 10:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:28.318 10:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3649369 /var/tmp/bdevperf.sock 00:07:28.318 10:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3649369 ']' 00:07:28.318 10:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:28.318 10:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:28.318 10:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:28.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:28.318 10:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:28.318 10:35:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:28.318 [2024-11-07 10:35:55.972152] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
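Condensed from the trace above and the checks that follow, the clean-pass grow sequence is roughly the sketch below. Every RPC, size, and flag is copied from the log; $rpc, $aio, and $lvs are shorthands introduced here for readability. The cluster math is consistent with the counts in this log: a 200M file at a 4 MiB cluster size gives 49 usable data clusters out of the raw 50 (the difference is lvstore metadata), the 150 MiB lvol pins ceil(150/4) = 38 of them, and doubling the file to 400M and growing raises the total to 99, leaving 99 - 38 = 61 free.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
aio=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev

truncate -s 200M "$aio"                         # backing file for the AIO bdev
$rpc bdev_aio_create "$aio" aio_bdev 4096       # 4 KiB logical block size
$rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
    --md-pages-per-cluster-ratio 300 aio_bdev lvs
lvs=$($rpc bdev_lvol_get_lvstores | jq -r '.[0].uuid')
$rpc bdev_lvol_create -u "$lvs" lvol 150        # 150 MiB volume -> 38 clusters
truncate -s 400M "$aio"                         # grow the file on disk...
$rpc bdev_aio_rescan aio_bdev                   # ...let the bdev see the new size...
$rpc bdev_lvol_grow_lvstore -u "$lvs"           # ...and claim it: 49 -> 99 clusters
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'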
00:07:28.318 [2024-11-07 10:35:55.972203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649369 ] 00:07:28.615 [2024-11-07 10:35:56.049252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.615 [2024-11-07 10:35:56.091254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.615 10:35:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:28.615 10:35:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:28.616 10:35:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:28.915 Nvme0n1 00:07:28.915 10:35:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:29.173 [ 00:07:29.174 { 00:07:29.174 "name": "Nvme0n1", 00:07:29.174 "aliases": [ 00:07:29.174 "0754ce72-2894-4549-a3e3-eaf67f24abce" 00:07:29.174 ], 00:07:29.174 "product_name": "NVMe disk", 00:07:29.174 "block_size": 4096, 00:07:29.174 "num_blocks": 38912, 00:07:29.174 "uuid": "0754ce72-2894-4549-a3e3-eaf67f24abce", 00:07:29.174 "numa_id": 1, 00:07:29.174 "assigned_rate_limits": { 00:07:29.174 "rw_ios_per_sec": 0, 00:07:29.174 "rw_mbytes_per_sec": 0, 00:07:29.174 "r_mbytes_per_sec": 0, 00:07:29.174 "w_mbytes_per_sec": 0 00:07:29.174 }, 00:07:29.174 "claimed": false, 00:07:29.174 "zoned": false, 00:07:29.174 "supported_io_types": { 00:07:29.174 "read": true, 00:07:29.174 "write": true, 00:07:29.174 "unmap": true, 00:07:29.174 "flush": true, 00:07:29.174 "reset": true, 00:07:29.174 "nvme_admin": true, 00:07:29.174 "nvme_io": true, 00:07:29.174 "nvme_io_md": false, 00:07:29.174 "write_zeroes": true, 00:07:29.174 "zcopy": false, 00:07:29.174 "get_zone_info": false, 00:07:29.174 "zone_management": false, 00:07:29.174 "zone_append": false, 00:07:29.174 "compare": true, 00:07:29.174 "compare_and_write": true, 00:07:29.174 "abort": true, 00:07:29.174 "seek_hole": false, 00:07:29.174 "seek_data": false, 00:07:29.174 "copy": true, 00:07:29.174 "nvme_iov_md": false 00:07:29.174 }, 00:07:29.174 "memory_domains": [ 00:07:29.174 { 00:07:29.174 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:07:29.174 "dma_device_type": 0 00:07:29.174 } 00:07:29.174 ], 00:07:29.174 "driver_specific": { 00:07:29.174 "nvme": [ 00:07:29.174 { 00:07:29.174 "trid": { 00:07:29.174 "trtype": "RDMA", 00:07:29.174 "adrfam": "IPv4", 00:07:29.174 "traddr": "192.168.100.8", 00:07:29.174 "trsvcid": "4420", 00:07:29.174 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:29.174 }, 00:07:29.174 "ctrlr_data": { 00:07:29.174 "cntlid": 1, 00:07:29.174 "vendor_id": "0x8086", 00:07:29.174 "model_number": "SPDK bdev Controller", 00:07:29.174 "serial_number": "SPDK0", 00:07:29.174 "firmware_revision": "25.01", 00:07:29.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:29.174 "oacs": { 00:07:29.174 "security": 0, 00:07:29.174 "format": 0, 00:07:29.174 "firmware": 0, 00:07:29.174 "ns_manage": 0 00:07:29.174 }, 00:07:29.174 "multi_ctrlr": true, 
00:07:29.174 "ana_reporting": false 00:07:29.174 }, 00:07:29.174 "vs": { 00:07:29.174 "nvme_version": "1.3" 00:07:29.174 }, 00:07:29.174 "ns_data": { 00:07:29.174 "id": 1, 00:07:29.174 "can_share": true 00:07:29.174 } 00:07:29.174 } 00:07:29.174 ], 00:07:29.174 "mp_policy": "active_passive" 00:07:29.174 } 00:07:29.174 } 00:07:29.174 ] 00:07:29.174 10:35:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3649585 00:07:29.174 10:35:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:29.174 10:35:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:29.174 Running I/O for 10 seconds... 00:07:30.107 Latency(us) 00:07:30.107 [2024-11-07T09:35:57.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.107 Nvme0n1 : 1.00 34725.00 135.64 0.00 0.00 0.00 0.00 0.00 00:07:30.107 [2024-11-07T09:35:57.778Z] =================================================================================================================== 00:07:30.107 [2024-11-07T09:35:57.778Z] Total : 34725.00 135.64 0.00 0.00 0.00 0.00 0.00 00:07:30.107 00:07:31.041 10:35:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7686ff48-d055-41af-9472-0c68651e1312 00:07:31.298 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.298 Nvme0n1 : 2.00 35184.50 137.44 0.00 0.00 0.00 0.00 0.00 00:07:31.298 [2024-11-07T09:35:58.969Z] =================================================================================================================== 00:07:31.298 [2024-11-07T09:35:58.969Z] Total : 35184.50 137.44 0.00 0.00 0.00 0.00 0.00 00:07:31.298 00:07:31.298 true 00:07:31.298 10:35:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7686ff48-d055-41af-9472-0c68651e1312 00:07:31.298 10:35:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:31.556 10:35:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:31.556 10:35:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:31.556 10:35:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3649585 00:07:32.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.128 Nvme0n1 : 3.00 35296.00 137.88 0.00 0.00 0.00 0.00 0.00 00:07:32.128 [2024-11-07T09:35:59.799Z] =================================================================================================================== 00:07:32.128 [2024-11-07T09:35:59.799Z] Total : 35296.00 137.88 0.00 0.00 0.00 0.00 0.00 00:07:32.128 00:07:33.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.065 Nvme0n1 : 4.00 35225.00 137.60 0.00 0.00 0.00 0.00 0.00 00:07:33.065 [2024-11-07T09:36:00.736Z] 
=================================================================================================================== 00:07:33.065 [2024-11-07T09:36:00.736Z] Total : 35225.00 137.60 0.00 0.00 0.00 0.00 0.00 00:07:33.065 00:07:34.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.438 Nvme0n1 : 5.00 35245.40 137.68 0.00 0.00 0.00 0.00 0.00 00:07:34.438 [2024-11-07T09:36:02.109Z] =================================================================================================================== 00:07:34.438 [2024-11-07T09:36:02.109Z] Total : 35245.40 137.68 0.00 0.00 0.00 0.00 0.00 00:07:34.438 00:07:35.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.372 Nvme0n1 : 6.00 35269.50 137.77 0.00 0.00 0.00 0.00 0.00 00:07:35.372 [2024-11-07T09:36:03.043Z] =================================================================================================================== 00:07:35.372 [2024-11-07T09:36:03.043Z] Total : 35269.50 137.77 0.00 0.00 0.00 0.00 0.00 00:07:35.372 00:07:36.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.304 Nvme0n1 : 7.00 35241.71 137.66 0.00 0.00 0.00 0.00 0.00 00:07:36.304 [2024-11-07T09:36:03.975Z] =================================================================================================================== 00:07:36.304 [2024-11-07T09:36:03.975Z] Total : 35241.71 137.66 0.00 0.00 0.00 0.00 0.00 00:07:36.304 00:07:37.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.237 Nvme0n1 : 8.00 35304.12 137.91 0.00 0.00 0.00 0.00 0.00 00:07:37.237 [2024-11-07T09:36:04.908Z] =================================================================================================================== 00:07:37.237 [2024-11-07T09:36:04.908Z] Total : 35304.12 137.91 0.00 0.00 0.00 0.00 0.00 00:07:37.237 00:07:38.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.171 Nvme0n1 : 9.00 35356.78 138.11 0.00 0.00 0.00 0.00 0.00 00:07:38.171 [2024-11-07T09:36:05.842Z] =================================================================================================================== 00:07:38.171 [2024-11-07T09:36:05.842Z] Total : 35356.78 138.11 0.00 0.00 0.00 0.00 0.00 00:07:38.171 00:07:39.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.106 Nvme0n1 : 10.00 35417.70 138.35 0.00 0.00 0.00 0.00 0.00 00:07:39.106 [2024-11-07T09:36:06.777Z] =================================================================================================================== 00:07:39.106 [2024-11-07T09:36:06.777Z] Total : 35417.70 138.35 0.00 0.00 0.00 0.00 0.00 00:07:39.106 00:07:39.106 00:07:39.106 Latency(us) 00:07:39.106 [2024-11-07T09:36:06.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.106 Nvme0n1 : 10.00 35419.29 138.36 0.00 0.00 3610.96 2490.37 15414.07 00:07:39.106 [2024-11-07T09:36:06.777Z] =================================================================================================================== 00:07:39.106 [2024-11-07T09:36:06.777Z] Total : 35419.29 138.36 0.00 0.00 3610.96 2490.37 15414.07 00:07:39.106 { 00:07:39.106 "results": [ 00:07:39.106 { 00:07:39.106 "job": "Nvme0n1", 00:07:39.106 "core_mask": "0x2", 00:07:39.106 "workload": "randwrite", 00:07:39.106 "status": "finished", 00:07:39.106 "queue_depth": 128, 00:07:39.106 "io_size": 4096, 
00:07:39.106 "runtime": 10.003166, 00:07:39.106 "iops": 35419.28625397199, 00:07:39.106 "mibps": 138.3565869295781, 00:07:39.106 "io_failed": 0, 00:07:39.106 "io_timeout": 0, 00:07:39.106 "avg_latency_us": 3610.9597227518666, 00:07:39.106 "min_latency_us": 2490.368, 00:07:39.106 "max_latency_us": 15414.0672 00:07:39.106 } 00:07:39.106 ], 00:07:39.106 "core_count": 1 00:07:39.106 } 00:07:39.106 10:36:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3649369 00:07:39.106 10:36:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3649369 ']' 00:07:39.106 10:36:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3649369 00:07:39.106 10:36:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:39.106 10:36:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:39.106 10:36:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3649369 00:07:39.364 10:36:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:39.364 10:36:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:39.364 10:36:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3649369' 00:07:39.364 killing process with pid 3649369 00:07:39.364 10:36:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3649369 00:07:39.364 Received shutdown signal, test time was about 10.000000 seconds 00:07:39.364 00:07:39.364 Latency(us) 00:07:39.364 [2024-11-07T09:36:07.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.364 [2024-11-07T09:36:07.035Z] =================================================================================================================== 00:07:39.364 [2024-11-07T09:36:07.035Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:39.364 10:36:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3649369 00:07:39.364 10:36:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:39.623 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:39.880 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7686ff48-d055-41af-9472-0c68651e1312 00:07:39.880 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:40.138 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:40.138 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:40.139 10:36:07 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:40.139 [2024-11-07 10:36:07.738225] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:40.139 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7686ff48-d055-41af-9472-0c68651e1312 00:07:40.139 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:40.139 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7686ff48-d055-41af-9472-0c68651e1312 00:07:40.139 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:40.139 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.139 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:40.139 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.139 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:40.139 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.139 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:40.139 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:40.139 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7686ff48-d055-41af-9472-0c68651e1312 00:07:40.397 request: 00:07:40.397 { 00:07:40.397 "uuid": "7686ff48-d055-41af-9472-0c68651e1312", 00:07:40.397 "method": "bdev_lvol_get_lvstores", 00:07:40.397 "req_id": 1 00:07:40.397 } 00:07:40.397 Got JSON-RPC error response 00:07:40.397 response: 00:07:40.397 { 00:07:40.397 "code": -19, 00:07:40.397 "message": "No such device" 00:07:40.397 } 00:07:40.397 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:40.397 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.397 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.397 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.397 10:36:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:40.656 aio_bdev 00:07:40.656 10:36:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0754ce72-2894-4549-a3e3-eaf67f24abce 00:07:40.656 10:36:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=0754ce72-2894-4549-a3e3-eaf67f24abce 00:07:40.656 10:36:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:40.656 10:36:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:40.656 10:36:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:40.656 10:36:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:40.656 10:36:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:40.656 10:36:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0754ce72-2894-4549-a3e3-eaf67f24abce -t 2000 00:07:40.914 [ 00:07:40.914 { 00:07:40.914 "name": "0754ce72-2894-4549-a3e3-eaf67f24abce", 00:07:40.914 "aliases": [ 00:07:40.914 "lvs/lvol" 00:07:40.914 ], 00:07:40.914 "product_name": "Logical Volume", 00:07:40.914 "block_size": 4096, 00:07:40.914 "num_blocks": 38912, 00:07:40.914 "uuid": "0754ce72-2894-4549-a3e3-eaf67f24abce", 00:07:40.914 "assigned_rate_limits": { 00:07:40.914 "rw_ios_per_sec": 0, 00:07:40.914 "rw_mbytes_per_sec": 0, 00:07:40.914 "r_mbytes_per_sec": 0, 00:07:40.914 "w_mbytes_per_sec": 0 00:07:40.914 }, 00:07:40.914 "claimed": false, 00:07:40.914 "zoned": false, 00:07:40.914 "supported_io_types": { 00:07:40.914 "read": true, 00:07:40.914 "write": true, 00:07:40.914 "unmap": true, 00:07:40.914 "flush": false, 00:07:40.914 "reset": true, 00:07:40.914 "nvme_admin": false, 00:07:40.914 "nvme_io": false, 00:07:40.914 "nvme_io_md": false, 00:07:40.914 "write_zeroes": true, 00:07:40.914 "zcopy": false, 00:07:40.914 "get_zone_info": false, 00:07:40.914 "zone_management": false, 00:07:40.914 "zone_append": false, 00:07:40.914 "compare": false, 00:07:40.914 "compare_and_write": false, 00:07:40.914 "abort": false, 00:07:40.914 "seek_hole": true, 00:07:40.914 "seek_data": true, 00:07:40.914 "copy": false, 00:07:40.914 "nvme_iov_md": false 00:07:40.914 }, 00:07:40.914 "driver_specific": { 00:07:40.914 "lvol": { 00:07:40.914 "lvol_store_uuid": "7686ff48-d055-41af-9472-0c68651e1312", 00:07:40.914 "base_bdev": "aio_bdev", 00:07:40.914 "thin_provision": false, 00:07:40.914 "num_allocated_clusters": 38, 00:07:40.914 "snapshot": false, 00:07:40.914 "clone": false, 00:07:40.914 "esnap_clone": false 00:07:40.914 } 00:07:40.914 } 00:07:40.914 } 00:07:40.914 ] 00:07:40.914 10:36:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:40.914 10:36:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7686ff48-d055-41af-9472-0c68651e1312 00:07:40.914 10:36:08 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:41.171 10:36:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:41.171 10:36:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:41.171 10:36:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7686ff48-d055-41af-9472-0c68651e1312 00:07:41.429 10:36:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:41.429 10:36:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0754ce72-2894-4549-a3e3-eaf67f24abce 00:07:41.429 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7686ff48-d055-41af-9472-0c68651e1312 00:07:41.686 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:41.944 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:41.944 00:07:41.944 real 0m15.264s 00:07:41.944 user 0m15.151s 00:07:41.944 sys 0m1.108s 00:07:41.944 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:41.944 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:41.944 ************************************ 00:07:41.944 END TEST lvs_grow_clean 00:07:41.944 ************************************ 00:07:41.944 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:41.944 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:41.944 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:41.944 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:41.944 ************************************ 00:07:41.944 START TEST lvs_grow_dirty 00:07:41.944 ************************************ 00:07:41.944 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:41.944 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:41.944 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:41.944 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:41.944 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:41.944 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # 
local aio_final_size_mb=400 00:07:41.944 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:41.944 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:41.944 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:41.944 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:42.208 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:42.209 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:42.209 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1074fa61-2220-4221-b87a-3759283a4015 00:07:42.209 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1074fa61-2220-4221-b87a-3759283a4015 00:07:42.209 10:36:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:42.472 10:36:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:42.472 10:36:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:42.472 10:36:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1074fa61-2220-4221-b87a-3759283a4015 lvol 150 00:07:42.730 10:36:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fbf0acbf-f09a-4e07-b9c0-6e4621b63fb7 00:07:42.730 10:36:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:42.730 10:36:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:42.730 [2024-11-07 10:36:10.397486] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:42.730 [2024-11-07 10:36:10.397549] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:42.987 true 00:07:42.987 10:36:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1074fa61-2220-4221-b87a-3759283a4015 00:07:42.987 10:36:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:42.987 10:36:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:42.987 10:36:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:43.244 10:36:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fbf0acbf-f09a-4e07-b9c0-6e4621b63fb7 00:07:43.501 10:36:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:43.502 [2024-11-07 10:36:11.111769] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:43.502 10:36:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:43.759 10:36:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3652545 00:07:43.759 10:36:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:43.759 10:36:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:43.759 10:36:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3652545 /var/tmp/bdevperf.sock 00:07:43.759 10:36:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3652545 ']' 00:07:43.759 10:36:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:43.759 10:36:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:43.759 10:36:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:43.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:43.759 10:36:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:43.759 10:36:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:43.759 [2024-11-07 10:36:11.325248] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:07:43.759 [2024-11-07 10:36:11.325298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652545 ] 00:07:43.759 [2024-11-07 10:36:11.401490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.017 [2024-11-07 10:36:11.442587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.017 10:36:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:44.017 10:36:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:44.017 10:36:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:44.274 Nvme0n1 00:07:44.274 10:36:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:44.274 [ 00:07:44.274 { 00:07:44.274 "name": "Nvme0n1", 00:07:44.274 "aliases": [ 00:07:44.274 "fbf0acbf-f09a-4e07-b9c0-6e4621b63fb7" 00:07:44.274 ], 00:07:44.274 "product_name": "NVMe disk", 00:07:44.274 "block_size": 4096, 00:07:44.274 "num_blocks": 38912, 00:07:44.274 "uuid": "fbf0acbf-f09a-4e07-b9c0-6e4621b63fb7", 00:07:44.274 "numa_id": 1, 00:07:44.274 "assigned_rate_limits": { 00:07:44.274 "rw_ios_per_sec": 0, 00:07:44.274 "rw_mbytes_per_sec": 0, 00:07:44.274 "r_mbytes_per_sec": 0, 00:07:44.274 "w_mbytes_per_sec": 0 00:07:44.274 }, 00:07:44.274 "claimed": false, 00:07:44.274 "zoned": false, 00:07:44.274 "supported_io_types": { 00:07:44.274 "read": true, 00:07:44.274 "write": true, 00:07:44.274 "unmap": true, 00:07:44.274 "flush": true, 00:07:44.274 "reset": true, 00:07:44.274 "nvme_admin": true, 00:07:44.274 "nvme_io": true, 00:07:44.274 "nvme_io_md": false, 00:07:44.274 "write_zeroes": true, 00:07:44.274 "zcopy": false, 00:07:44.274 "get_zone_info": false, 00:07:44.274 "zone_management": false, 00:07:44.274 "zone_append": false, 00:07:44.274 "compare": true, 00:07:44.274 "compare_and_write": true, 00:07:44.274 "abort": true, 00:07:44.274 "seek_hole": false, 00:07:44.274 "seek_data": false, 00:07:44.274 "copy": true, 00:07:44.274 "nvme_iov_md": false 00:07:44.274 }, 00:07:44.274 "memory_domains": [ 00:07:44.274 { 00:07:44.274 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:07:44.274 "dma_device_type": 0 00:07:44.274 } 00:07:44.274 ], 00:07:44.274 "driver_specific": { 00:07:44.274 "nvme": [ 00:07:44.275 { 00:07:44.275 "trid": { 00:07:44.275 "trtype": "RDMA", 00:07:44.275 "adrfam": "IPv4", 00:07:44.275 "traddr": "192.168.100.8", 00:07:44.275 "trsvcid": "4420", 00:07:44.275 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:44.275 }, 00:07:44.275 "ctrlr_data": { 00:07:44.275 "cntlid": 1, 00:07:44.275 "vendor_id": "0x8086", 00:07:44.275 "model_number": "SPDK bdev Controller", 00:07:44.275 "serial_number": "SPDK0", 00:07:44.275 "firmware_revision": "25.01", 00:07:44.275 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:44.275 "oacs": { 00:07:44.275 "security": 0, 00:07:44.275 "format": 0, 00:07:44.275 "firmware": 0, 00:07:44.275 "ns_manage": 0 00:07:44.275 }, 00:07:44.275 "multi_ctrlr": true, 
00:07:44.275 "ana_reporting": false 00:07:44.275 }, 00:07:44.275 "vs": { 00:07:44.275 "nvme_version": "1.3" 00:07:44.275 }, 00:07:44.275 "ns_data": { 00:07:44.275 "id": 1, 00:07:44.275 "can_share": true 00:07:44.275 } 00:07:44.275 } 00:07:44.275 ], 00:07:44.275 "mp_policy": "active_passive" 00:07:44.275 } 00:07:44.275 } 00:07:44.275 ] 00:07:44.533 10:36:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3652796 00:07:44.533 10:36:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:44.533 10:36:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:44.533 Running I/O for 10 seconds... 00:07:45.466 Latency(us) 00:07:45.466 [2024-11-07T09:36:13.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.466 Nvme0n1 : 1.00 35072.00 137.00 0.00 0.00 0.00 0.00 0.00 00:07:45.466 [2024-11-07T09:36:13.137Z] =================================================================================================================== 00:07:45.466 [2024-11-07T09:36:13.137Z] Total : 35072.00 137.00 0.00 0.00 0.00 0.00 0.00 00:07:45.466 00:07:46.399 10:36:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1074fa61-2220-4221-b87a-3759283a4015 00:07:46.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.399 Nvme0n1 : 2.00 35361.00 138.13 0.00 0.00 0.00 0.00 0.00 00:07:46.399 [2024-11-07T09:36:14.070Z] =================================================================================================================== 00:07:46.399 [2024-11-07T09:36:14.070Z] Total : 35361.00 138.13 0.00 0.00 0.00 0.00 0.00 00:07:46.399 00:07:46.656 true 00:07:46.656 10:36:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1074fa61-2220-4221-b87a-3759283a4015 00:07:46.656 10:36:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:46.656 10:36:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:46.656 10:36:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:46.656 10:36:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3652796 00:07:47.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.588 Nvme0n1 : 3.00 35446.00 138.46 0.00 0.00 0.00 0.00 0.00 00:07:47.588 [2024-11-07T09:36:15.259Z] =================================================================================================================== 00:07:47.588 [2024-11-07T09:36:15.259Z] Total : 35446.00 138.46 0.00 0.00 0.00 0.00 0.00 00:07:47.588 00:07:48.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.521 Nvme0n1 : 4.00 35575.75 138.97 0.00 0.00 0.00 0.00 0.00 00:07:48.521 [2024-11-07T09:36:16.192Z] 
=================================================================================================================== 00:07:48.521 [2024-11-07T09:36:16.192Z] Total : 35575.75 138.97 0.00 0.00 0.00 0.00 0.00 00:07:48.521 00:07:49.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.453 Nvme0n1 : 5.00 35654.80 139.28 0.00 0.00 0.00 0.00 0.00 00:07:49.453 [2024-11-07T09:36:17.124Z] =================================================================================================================== 00:07:49.453 [2024-11-07T09:36:17.124Z] Total : 35654.80 139.28 0.00 0.00 0.00 0.00 0.00 00:07:49.453 00:07:50.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.385 Nvme0n1 : 6.00 35712.33 139.50 0.00 0.00 0.00 0.00 0.00 00:07:50.385 [2024-11-07T09:36:18.056Z] =================================================================================================================== 00:07:50.385 [2024-11-07T09:36:18.056Z] Total : 35712.33 139.50 0.00 0.00 0.00 0.00 0.00 00:07:50.385 00:07:51.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.757 Nvme0n1 : 7.00 35766.57 139.71 0.00 0.00 0.00 0.00 0.00 00:07:51.757 [2024-11-07T09:36:19.428Z] =================================================================================================================== 00:07:51.757 [2024-11-07T09:36:19.428Z] Total : 35766.57 139.71 0.00 0.00 0.00 0.00 0.00 00:07:51.757 00:07:52.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.690 Nvme0n1 : 8.00 35795.75 139.83 0.00 0.00 0.00 0.00 0.00 00:07:52.690 [2024-11-07T09:36:20.361Z] =================================================================================================================== 00:07:52.690 [2024-11-07T09:36:20.361Z] Total : 35795.75 139.83 0.00 0.00 0.00 0.00 0.00 00:07:52.690 00:07:53.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.622 Nvme0n1 : 9.00 35722.33 139.54 0.00 0.00 0.00 0.00 0.00 00:07:53.622 [2024-11-07T09:36:21.293Z] =================================================================================================================== 00:07:53.622 [2024-11-07T09:36:21.293Z] Total : 35722.33 139.54 0.00 0.00 0.00 0.00 0.00 00:07:53.622 00:07:54.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.554 Nvme0n1 : 10.00 35756.50 139.67 0.00 0.00 0.00 0.00 0.00 00:07:54.554 [2024-11-07T09:36:22.225Z] =================================================================================================================== 00:07:54.554 [2024-11-07T09:36:22.225Z] Total : 35756.50 139.67 0.00 0.00 0.00 0.00 0.00 00:07:54.554 00:07:54.554 00:07:54.554 Latency(us) 00:07:54.554 [2024-11-07T09:36:22.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.554 Nvme0n1 : 10.00 35756.90 139.68 0.00 0.00 3576.83 2621.44 9175.04 00:07:54.554 [2024-11-07T09:36:22.225Z] =================================================================================================================== 00:07:54.554 [2024-11-07T09:36:22.225Z] Total : 35756.90 139.68 0.00 0.00 3576.83 2621.44 9175.04 00:07:54.554 { 00:07:54.554 "results": [ 00:07:54.554 { 00:07:54.554 "job": "Nvme0n1", 00:07:54.554 "core_mask": "0x2", 00:07:54.554 "workload": "randwrite", 00:07:54.554 "status": "finished", 00:07:54.554 "queue_depth": 128, 00:07:54.554 "io_size": 4096, 
00:07:54.554 "runtime": 10.003467, 00:07:54.554 "iops": 35756.903081701574, 00:07:54.554 "mibps": 139.67540266289677, 00:07:54.554 "io_failed": 0, 00:07:54.554 "io_timeout": 0, 00:07:54.554 "avg_latency_us": 3576.834956117117, 00:07:54.554 "min_latency_us": 2621.44, 00:07:54.554 "max_latency_us": 9175.04 00:07:54.554 } 00:07:54.554 ], 00:07:54.554 "core_count": 1 00:07:54.554 } 00:07:54.554 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3652545 00:07:54.554 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3652545 ']' 00:07:54.554 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3652545 00:07:54.554 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:07:54.554 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:54.554 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3652545 00:07:54.554 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:54.554 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:54.554 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3652545' 00:07:54.554 killing process with pid 3652545 00:07:54.554 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3652545 00:07:54.554 Received shutdown signal, test time was about 10.000000 seconds 00:07:54.554 00:07:54.554 Latency(us) 00:07:54.554 [2024-11-07T09:36:22.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.554 [2024-11-07T09:36:22.225Z] =================================================================================================================== 00:07:54.554 [2024-11-07T09:36:22.225Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:54.554 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3652545 00:07:54.812 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:55.069 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:55.069 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1074fa61-2220-4221-b87a-3759283a4015 00:07:55.069 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:55.327 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:55.327 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:55.327 10:36:22 
00:07:55.327 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3649015
00:07:55.327 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3649015
00:07:55.327 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3649015 Killed "${NVMF_APP[@]}" "$@"
00:07:55.327 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
00:07:55.327 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:07:55.327 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:55.327 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:55.327 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:55.327 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3654628
00:07:55.327 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3654628
00:07:55.327 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3654628 ']'
00:07:55.327 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:55.327 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:55.327 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:55.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:55.327 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:55.327 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:55.327 10:36:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:07:55.585 [2024-11-07 10:36:22.983981] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:07:55.585 [2024-11-07 10:36:22.984038] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:55.585 [2024-11-07 10:36:23.060765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:55.585 [2024-11-07 10:36:23.098778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:55.585 [2024-11-07 10:36:23.098817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:55.585 [2024-11-07 10:36:23.098826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:55.585 [2024-11-07 10:36:23.098834] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
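The `kill -9` of the previous target (pid 3649015) deliberately leaves the logical volume store dirty; bash then reports the killed job and `true` swallows the non-zero status so a `set -e` shell does not abort the test. A minimal sketch of that tolerate-the-kill pattern, assuming the app's pid is in $pid:

    # Kill an app hard and absorb the expected non-zero wait status.
    kill -9 "$pid"
    wait "$pid" || true   # wait reports the SIGKILL; '|| true' keeps 'set -e' happy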
00:07:55.585 [2024-11-07 10:36:23.098857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:55.585 [2024-11-07 10:36:23.099455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:55.585 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:55.585 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0
00:07:55.585 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:55.585 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:55.585 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:55.585 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:55.585 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:55.842 [2024-11-07 10:36:23.404196] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore
00:07:55.842 [2024-11-07 10:36:23.404275] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:07:55.842 [2024-11-07 10:36:23.404302] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:07:55.842 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:07:55.842 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fbf0acbf-f09a-4e07-b9c0-6e4621b63fb7
00:07:55.842 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=fbf0acbf-f09a-4e07-b9c0-6e4621b63fb7
00:07:55.842 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:07:55.842 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i
00:07:55.842 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:07:55.842 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:07:55.842 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:07:56.100 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fbf0acbf-f09a-4e07-b9c0-6e4621b63fb7 -t 2000
00:07:56.100 [
00:07:56.100 {
00:07:56.100 "name": "fbf0acbf-f09a-4e07-b9c0-6e4621b63fb7",
00:07:56.100 "aliases": [
00:07:56.100 "lvs/lvol"
00:07:56.100 ],
00:07:56.100 "product_name": "Logical Volume",
00:07:56.100 "block_size": 4096,
00:07:56.100 "num_blocks": 38912,
00:07:56.100 "uuid": "fbf0acbf-f09a-4e07-b9c0-6e4621b63fb7",
00:07:56.100 "assigned_rate_limits": {
00:07:56.100 "rw_ios_per_sec": 0,
00:07:56.100 "rw_mbytes_per_sec": 0,
00:07:56.100 "r_mbytes_per_sec": 0,
00:07:56.100 "w_mbytes_per_sec": 0
00:07:56.100 },
00:07:56.100 "claimed": false,
00:07:56.100 "zoned": false,
00:07:56.100 "supported_io_types": {
00:07:56.100 "read": true,
00:07:56.100 "write": true,
00:07:56.100 "unmap": true,
00:07:56.100 "flush": false,
00:07:56.100 "reset": true,
00:07:56.100 "nvme_admin": false,
00:07:56.100 "nvme_io": false,
00:07:56.100 "nvme_io_md": false,
00:07:56.100 "write_zeroes": true,
00:07:56.100 "zcopy": false,
00:07:56.100 "get_zone_info": false,
00:07:56.100 "zone_management": false,
00:07:56.100 "zone_append": false,
00:07:56.100 "compare": false,
00:07:56.100 "compare_and_write": false,
00:07:56.100 "abort": false,
00:07:56.100 "seek_hole": true,
00:07:56.100 "seek_data": true,
00:07:56.100 "copy": false,
00:07:56.100 "nvme_iov_md": false
00:07:56.100 },
00:07:56.100 "driver_specific": {
00:07:56.100 "lvol": {
00:07:56.100 "lvol_store_uuid": "1074fa61-2220-4221-b87a-3759283a4015",
00:07:56.100 "base_bdev": "aio_bdev",
00:07:56.100 "thin_provision": false,
00:07:56.100 "num_allocated_clusters": 38,
00:07:56.100 "snapshot": false,
00:07:56.100 "clone": false,
00:07:56.100 "esnap_clone": false
00:07:56.100 }
00:07:56.100 }
00:07:56.100 }
00:07:56.100 ]
00:07:56.100 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0
00:07:56.357 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1074fa61-2220-4221-b87a-3759283a4015
00:07:56.357 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:07:56.357 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:07:56.357 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1074fa61-2220-4221-b87a-3759283a4015
00:07:56.357 10:36:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:07:56.614 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
00:07:56.614 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:07:56.871 [2024-11-07 10:36:24.316805] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:07:56.871 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1074fa61-2220-4221-b87a-3759283a4015
00:07:56.871 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0
00:07:56.871 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1074fa61-2220-4221-b87a-3759283a4015
00:07:56.871 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:07:56.871 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:56.871 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:07:56.871 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:56.871 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:07:56.871 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:56.871 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
00:07:56.871 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]]
00:07:56.871 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1074fa61-2220-4221-b87a-3759283a4015
00:07:56.871 request:
00:07:56.871 {
00:07:56.871 "uuid": "1074fa61-2220-4221-b87a-3759283a4015",
00:07:56.871 "method": "bdev_lvol_get_lvstores",
00:07:56.871 "req_id": 1
00:07:56.871 }
00:07:56.871 Got JSON-RPC error response
00:07:56.871 response:
00:07:56.871 {
00:07:56.871 "code": -19,
00:07:56.871 "message": "No such device"
00:07:56.871 }
00:07:56.871 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1
00:07:56.871 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:56.871 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:56.871 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:56.871 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:57.129 aio_bdev
00:07:57.129 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fbf0acbf-f09a-4e07-b9c0-6e4621b63fb7
00:07:57.129 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=fbf0acbf-f09a-4e07-b9c0-6e4621b63fb7
00:07:57.129 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:07:57.129 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i
00:07:57.129 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:07:57.129 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000
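The NOT helper above runs `bdev_lvol_get_lvstores` expecting it to fail, since `bdev_aio_delete` has just removed the base bdev; the JSON-RPC error -19 ("No such device") is the pass condition. A minimal sketch of asserting that expected failure, using the paths and UUID from the log:

    # The RPC must fail once the base bdev is gone; success is a test error.
    if /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        bdev_lvol_get_lvstores -u 1074fa61-2220-4221-b87a-3759283a4015; then
      echo "lvstore unexpectedly still present" >&2
      exit 1
    fi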
00:07:57.129 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:07:57.386 10:36:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fbf0acbf-f09a-4e07-b9c0-6e4621b63fb7 -t 2000
00:07:57.644 [
00:07:57.644 {
00:07:57.644 "name": "fbf0acbf-f09a-4e07-b9c0-6e4621b63fb7",
00:07:57.644 "aliases": [
00:07:57.644 "lvs/lvol"
00:07:57.644 ],
00:07:57.644 "product_name": "Logical Volume",
00:07:57.644 "block_size": 4096,
00:07:57.644 "num_blocks": 38912,
00:07:57.644 "uuid": "fbf0acbf-f09a-4e07-b9c0-6e4621b63fb7",
00:07:57.644 "assigned_rate_limits": {
00:07:57.644 "rw_ios_per_sec": 0,
00:07:57.644 "rw_mbytes_per_sec": 0,
00:07:57.644 "r_mbytes_per_sec": 0,
00:07:57.644 "w_mbytes_per_sec": 0
00:07:57.644 },
00:07:57.644 "claimed": false,
00:07:57.644 "zoned": false,
00:07:57.644 "supported_io_types": {
00:07:57.644 "read": true,
00:07:57.644 "write": true,
00:07:57.644 "unmap": true,
00:07:57.644 "flush": false,
00:07:57.644 "reset": true,
00:07:57.644 "nvme_admin": false,
00:07:57.644 "nvme_io": false,
00:07:57.644 "nvme_io_md": false,
00:07:57.644 "write_zeroes": true,
00:07:57.644 "zcopy": false,
00:07:57.644 "get_zone_info": false,
00:07:57.644 "zone_management": false,
00:07:57.644 "zone_append": false,
00:07:57.644 "compare": false,
00:07:57.644 "compare_and_write": false,
00:07:57.644 "abort": false,
00:07:57.644 "seek_hole": true,
00:07:57.644 "seek_data": true,
00:07:57.644 "copy": false,
00:07:57.644 "nvme_iov_md": false
00:07:57.644 },
00:07:57.644 "driver_specific": {
00:07:57.644 "lvol": {
00:07:57.644 "lvol_store_uuid": "1074fa61-2220-4221-b87a-3759283a4015",
00:07:57.644 "base_bdev": "aio_bdev",
00:07:57.644 "thin_provision": false,
00:07:57.644 "num_allocated_clusters": 38,
00:07:57.644 "snapshot": false,
00:07:57.644 "clone": false,
00:07:57.644 "esnap_clone": false
00:07:57.644 }
00:07:57.644 }
00:07:57.644 }
00:07:57.644 ]
00:07:57.644 10:36:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0
00:07:57.644 10:36:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1074fa61-2220-4221-b87a-3759283a4015
00:07:57.644 10:36:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:07:57.644 10:36:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:07:57.644 10:36:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1074fa61-2220-4221-b87a-3759283a4015
00:07:57.644 10:36:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:07:57.901 10:36:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:07:57.901 10:36:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fbf0acbf-f09a-4e07-b9c0-6e4621b63fb7
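After aio_bdev is re-created, the blobstore is replayed from its on-disk metadata and the test re-checks the cluster accounting with jq, exactly as traced above. A condensed sketch of those two assertions, using the rpc.py path and UUID from the log:

    # Verify the recovered lvstore still reports the expected cluster counts.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    free=$($rpc bdev_lvol_get_lvstores -u 1074fa61-2220-4221-b87a-3759283a4015 | jq -r '.[0].free_clusters')
    total=$($rpc bdev_lvol_get_lvstores -u 1074fa61-2220-4221-b87a-3759283a4015 | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 )) || exit 1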
00:07:58.158 10:36:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1074fa61-2220-4221-b87a-3759283a4015
00:07:58.416 10:36:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:07:58.416 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:58.416
00:07:58.416 real 0m16.613s
00:07:58.416 user 0m43.326s
00:07:58.416 sys 0m3.194s
00:07:58.416 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:58.416 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:58.416 ************************************
00:07:58.416 END TEST lvs_grow_dirty
00:07:58.416 ************************************
00:07:58.673 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
00:07:58.673 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id
00:07:58.673 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0
00:07:58.673 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']'
00:07:58.673 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:07:58.673 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0
00:07:58.673 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]]
00:07:58.673 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files
00:07:58.673 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:07:58.673 nvmf_trace.0
00:07:58.673 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0
00:07:58.673 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini
00:07:58.673 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:58.673 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync
00:07:58.673 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:07:58.673 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:07:58.673 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e
00:07:58.673 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:58.673 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:07:58.674 rmmod nvme_rdma
00:07:58.674 rmmod nvme_fabrics
00:07:58.674 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:58.674 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e
00:07:58.674 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0
00:07:58.674 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3654628 ']'
00:07:58.674 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3654628
00:07:58.674 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3654628 ']'
00:07:58.674 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3654628
00:07:58.674 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname
00:07:58.674 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:58.674 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3654628
00:07:58.674 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:58.674 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:58.674 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3654628'
00:07:58.674 killing process with pid 3654628
00:07:58.674 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3654628
00:07:58.674 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3654628
00:07:58.932 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:58.932 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:07:58.932
00:07:58.932 real 0m39.734s
00:07:58.932 user 1m4.236s
00:07:58.932 sys 0m9.766s
00:07:58.932 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:58.932 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:58.932 ************************************
00:07:58.932 END TEST nvmf_lvs_grow
00:07:58.932 ************************************
00:07:58.932 10:36:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma
00:07:58.932 10:36:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:07:58.932 10:36:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:58.932 10:36:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:58.932 ************************************
00:07:58.932 START TEST nvmf_bdev_io_wait
00:07:58.932 ************************************
00:07:58.932 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma
00:07:58.932 * Looking for test storage...
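The killprocess sequence above follows the harness's usual liveness-check pattern: probe the pid with `kill -0`, confirm it is an SPDK reactor rather than something like sudo, then SIGTERM it and wait. A condensed sketch of that flow, assuming a target pid in $pid:

    # killprocess, condensed from the trace above.
    kill -0 "$pid"                             # fails if the pid no longer exists
    name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
    [ "$name" != sudo ] && kill "$pid" && wait "$pid"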
00:07:58.932 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:07:58.932 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:07:58.932 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version
00:07:58.932 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-:
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-:
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<'
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:59.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:59.191 --rc genhtml_branch_coverage=1
00:07:59.191 --rc genhtml_function_coverage=1
00:07:59.191 --rc genhtml_legend=1
00:07:59.191 --rc geninfo_all_blocks=1
00:07:59.191 --rc geninfo_unexecuted_blocks=1
00:07:59.191
00:07:59.191 '
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:07:59.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:59.191 --rc genhtml_branch_coverage=1
00:07:59.191 --rc genhtml_function_coverage=1
00:07:59.191 --rc genhtml_legend=1
00:07:59.191 --rc geninfo_all_blocks=1
00:07:59.191 --rc geninfo_unexecuted_blocks=1
00:07:59.191
00:07:59.191 '
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:07:59.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:59.191 --rc genhtml_branch_coverage=1
00:07:59.191 --rc genhtml_function_coverage=1
00:07:59.191 --rc genhtml_legend=1
00:07:59.191 --rc geninfo_all_blocks=1
00:07:59.191 --rc geninfo_unexecuted_blocks=1
00:07:59.191
00:07:59.191 '
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:07:59.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:59.191 --rc genhtml_branch_coverage=1
00:07:59.191 --rc genhtml_function_coverage=1
00:07:59.191 --rc genhtml_legend=1
00:07:59.191 --rc geninfo_all_blocks=1
00:07:59.191 --rc geninfo_unexecuted_blocks=1
00:07:59.191
00:07:59.191 '
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
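The scripts/common.sh trace above is a plain dotted-version comparison: both versions are split on ".-:" and compared field by field, so `lt 1.15 2` succeeds and the newer lcov options are exported. A minimal standalone sketch of the same idea (not the harness's exact implementation):

    # Succeed when version $1 is older than version $2.
    version_lt() {
      IFS=.- read -ra a <<< "$1"
      IFS=.- read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "1.15 < 2"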
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:59.191 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:59.192 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64
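The `line 33: [: : integer expression expected` message above is benign noise: an empty variable reaches a numeric `[ ... -eq 1 ]` test, the test fails, and the branch is simply skipped. The usual hardening is to default the value before comparing; a sketch, with a hypothetical variable name standing in for whatever common.sh line 33 actually tests:

    # Default an unset/empty flag to 0 so the numeric test never sees ''.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
      echo "optional feature enabled"
    fi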
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable
00:07:59.192 10:36:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=()
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=()
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=()
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=()
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=()
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
00:08:05.861 Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
00:08:05.861 Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
00:08:05.861 Found net devices under 0000:d9:00.0: mlx_0_0
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
00:08:05.861 Found net devices under 0000:d9:00.1: mlx_0_1
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:08:05.861 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init
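The discovery loop above maps each Mellanox PCI function to its kernel netdev by globbing sysfs, which is how 0000:d9:00.0 and 0000:d9:00.1 resolve to mlx_0_0 and mlx_0_1. A one-liner sketch using the PCI address from the log:

    # Map a PCI function to its netdev name via sysfs.
    pci=0000:d9:00.0
    ls "/sys/bus/pci/devices/$pci/net/"    # prints mlx_0_0 on this rig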
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2
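rdma_device_init loads the InfiniBand/RDMA kernel stack before any IPs are assigned; the modprobe sequence traced above, gathered in one place:

    # The module load order from load_ib_rdma_modules above.
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$m"
    done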
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:08:05.862 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:08:05.862 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:08:05.862 altname enp217s0f0np0
00:08:05.862 altname ens818f0np0
00:08:05.862 inet 192.168.100.8/24 scope global mlx_0_0
00:08:05.862 valid_lft forever preferred_lft forever
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:08:05.862 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:08:05.862 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:08:05.862 altname enp217s0f1np1
00:08:05.862 altname ens818f1np1
00:08:05.862 inet 192.168.100.9/24 scope global mlx_0_1
00:08:05.862 valid_lft forever preferred_lft forever
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:08:05.862 192.168.100.9'
00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:08:05.862 192.168.100.9'
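get_ip_address, traced twice above, extracts the IPv4 address of each RDMA netdev exactly as shown: take field 4 of `ip -o -4 addr show` and strip the CIDR suffix. Condensed into one helper:

    # IPv4 address of an interface, prefix length stripped.
    get_ip_address() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # 192.168.100.9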
10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:05.862 192.168.100.9' 00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:05.862 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:05.863 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:05.863 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:05.863 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:05.863 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:05.863 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:05.863 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:05.863 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3658505 00:08:05.863 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:05.863 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3658505 00:08:05.863 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3658505 ']' 00:08:05.863 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.863 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:05.863 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.863 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:05.863 10:36:33 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:05.863 [2024-11-07 10:36:33.529137] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
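The @485/@486 trace above is how the suite splits the newline-separated RDMA_IP_LIST into its two target addresses. A minimal standalone sketch of that idiom, assuming RDMA_IP_LIST holds one address per line exactly as captured here:

RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'                      # one address per line
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)          # -> 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # -> 192.168.100.9

tail -n +2 drops the first line, so piping it back through head -n 1 isolates the second address; the '[' -z ... ']' test that follows only guards against an empty first address before NVMF_TRANSPORT_OPTS is set.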
00:08:05.863 [2024-11-07 10:36:33.529194] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.122 [2024-11-07 10:36:33.608483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.122 [2024-11-07 10:36:33.649780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.122 [2024-11-07 10:36:33.649822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.122 [2024-11-07 10:36:33.649832] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.122 [2024-11-07 10:36:33.649840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.122 [2024-11-07 10:36:33.649847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.122 [2024-11-07 10:36:33.651635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.122 [2024-11-07 10:36:33.651659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.122 [2024-11-07 10:36:33.651745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.122 [2024-11-07 10:36:33.651747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.058 10:36:34 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.058 [2024-11-07 10:36:34.515180] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5a2e60/0x5a7350) succeed. 00:08:07.058 [2024-11-07 10:36:34.524695] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5a44f0/0x5e89f0) succeed. 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.058 Malloc0 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.058 [2024-11-07 10:36:34.712467] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3658649 00:08:07.058 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3658652 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 
00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:07.059 { 00:08:07.059 "params": { 00:08:07.059 "name": "Nvme$subsystem", 00:08:07.059 "trtype": "$TEST_TRANSPORT", 00:08:07.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:07.059 "adrfam": "ipv4", 00:08:07.059 "trsvcid": "$NVMF_PORT", 00:08:07.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:07.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:07.059 "hdgst": ${hdgst:-false}, 00:08:07.059 "ddgst": ${ddgst:-false} 00:08:07.059 }, 00:08:07.059 "method": "bdev_nvme_attach_controller" 00:08:07.059 } 00:08:07.059 EOF 00:08:07.059 )") 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3658654 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:07.059 { 00:08:07.059 "params": { 00:08:07.059 "name": "Nvme$subsystem", 00:08:07.059 "trtype": "$TEST_TRANSPORT", 00:08:07.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:07.059 "adrfam": "ipv4", 00:08:07.059 "trsvcid": "$NVMF_PORT", 00:08:07.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:07.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:07.059 "hdgst": ${hdgst:-false}, 00:08:07.059 "ddgst": ${ddgst:-false} 00:08:07.059 }, 00:08:07.059 "method": "bdev_nvme_attach_controller" 00:08:07.059 } 00:08:07.059 EOF 00:08:07.059 )") 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3658657 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:07.059 { 00:08:07.059 "params": { 00:08:07.059 "name": "Nvme$subsystem", 00:08:07.059 "trtype": "$TEST_TRANSPORT", 
00:08:07.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:07.059 "adrfam": "ipv4", 00:08:07.059 "trsvcid": "$NVMF_PORT", 00:08:07.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:07.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:07.059 "hdgst": ${hdgst:-false}, 00:08:07.059 "ddgst": ${ddgst:-false} 00:08:07.059 }, 00:08:07.059 "method": "bdev_nvme_attach_controller" 00:08:07.059 } 00:08:07.059 EOF 00:08:07.059 )") 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:07.059 { 00:08:07.059 "params": { 00:08:07.059 "name": "Nvme$subsystem", 00:08:07.059 "trtype": "$TEST_TRANSPORT", 00:08:07.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:07.059 "adrfam": "ipv4", 00:08:07.059 "trsvcid": "$NVMF_PORT", 00:08:07.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:07.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:07.059 "hdgst": ${hdgst:-false}, 00:08:07.059 "ddgst": ${ddgst:-false} 00:08:07.059 }, 00:08:07.059 "method": "bdev_nvme_attach_controller" 00:08:07.059 } 00:08:07.059 EOF 00:08:07.059 )") 00:08:07.059 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:07.319 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3658649 00:08:07.319 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:07.319 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:07.319 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:07.319 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:07.319 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:07.319 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:07.319 "params": { 00:08:07.319 "name": "Nvme1", 00:08:07.319 "trtype": "rdma", 00:08:07.319 "traddr": "192.168.100.8", 00:08:07.319 "adrfam": "ipv4", 00:08:07.319 "trsvcid": "4420", 00:08:07.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:07.319 "hdgst": false, 00:08:07.319 "ddgst": false 00:08:07.319 }, 00:08:07.319 "method": "bdev_nvme_attach_controller" 00:08:07.319 }' 00:08:07.319 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:07.319 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:07.319 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:07.319 "params": { 00:08:07.319 "name": "Nvme1", 00:08:07.319 "trtype": "rdma", 00:08:07.319 "traddr": "192.168.100.8", 00:08:07.319 "adrfam": "ipv4", 00:08:07.319 "trsvcid": "4420", 00:08:07.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:07.319 "hdgst": false, 00:08:07.319 "ddgst": false 00:08:07.319 }, 00:08:07.319 "method": "bdev_nvme_attach_controller" 00:08:07.319 }' 00:08:07.319 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:07.319 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:07.319 "params": { 00:08:07.319 "name": "Nvme1", 00:08:07.319 "trtype": "rdma", 00:08:07.319 "traddr": "192.168.100.8", 00:08:07.319 "adrfam": "ipv4", 00:08:07.319 "trsvcid": "4420", 00:08:07.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:07.319 "hdgst": false, 00:08:07.319 "ddgst": false 00:08:07.319 }, 00:08:07.319 "method": "bdev_nvme_attach_controller" 00:08:07.319 }' 00:08:07.319 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:07.319 10:36:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:07.319 "params": { 00:08:07.319 "name": "Nvme1", 00:08:07.319 "trtype": "rdma", 00:08:07.319 "traddr": "192.168.100.8", 00:08:07.319 "adrfam": "ipv4", 00:08:07.319 "trsvcid": "4420", 00:08:07.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:07.319 "hdgst": false, 00:08:07.319 "ddgst": false 00:08:07.319 }, 00:08:07.319 "method": "bdev_nvme_attach_controller" 00:08:07.319 }' 00:08:07.319 [2024-11-07 10:36:34.764602] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:08:07.319 [2024-11-07 10:36:34.764656] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:07.319 [2024-11-07 10:36:34.768420] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:08:07.319 [2024-11-07 10:36:34.768469] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:07.319 [2024-11-07 10:36:34.768727] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:08:07.319 [2024-11-07 10:36:34.768771] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:07.319 [2024-11-07 10:36:34.769230] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
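Each bdevperf instance reads its bdev config from /dev/fd/63, i.e. a <(gen_nvmf_target_json) process substitution rather than a file on disk. The heredoc-with-defaults pattern traced above is the core of it; an abridged sketch follows (gen_attach_entry is a hypothetical name for one stanza; the suite's gen_nvmf_target_json presumably wraps these entries into a complete config document before jq compacts it, and that wrapper is omitted here). TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, and NVMF_PORT are assumed set by the harness; hdgst/ddgst fall back to false via ${var:-default} expansion:

gen_attach_entry() {
    local subsystem=${1:-1}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# Fed to each client through process substitution, hence /dev/fd/63 in the trace:
#   bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256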
00:08:07.319 [2024-11-07 10:36:34.769273] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:07.319 [2024-11-07 10:36:34.962374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.578 [2024-11-07 10:36:35.016353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:07.578 [2024-11-07 10:36:35.017187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.578 [2024-11-07 10:36:35.057978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:07.578 [2024-11-07 10:36:35.071959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.578 [2024-11-07 10:36:35.108456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:07.578 [2024-11-07 10:36:35.173677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.578 [2024-11-07 10:36:35.226887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:07.837 Running I/O for 1 seconds... 00:08:07.837 Running I/O for 1 seconds... 00:08:07.837 Running I/O for 1 seconds... 00:08:07.837 Running I/O for 1 seconds... 00:08:08.773 20928.00 IOPS, 81.75 MiB/s 00:08:08.773 Latency(us) 00:08:08.773 [2024-11-07T09:36:36.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.773 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:08.773 Nvme1n1 : 1.01 20968.95 81.91 0.00 0.00 6088.25 3617.59 13841.20 00:08:08.773 [2024-11-07T09:36:36.444Z] =================================================================================================================== 00:08:08.773 [2024-11-07T09:36:36.444Z] Total : 20968.95 81.91 0.00 0.00 6088.25 3617.59 13841.20 00:08:08.773 14682.00 IOPS, 57.35 MiB/s [2024-11-07T09:36:36.444Z] 15526.00 IOPS, 60.65 MiB/s 00:08:08.773 Latency(us) 00:08:08.773 [2024-11-07T09:36:36.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.773 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:08.773 Nvme1n1 : 1.01 14733.89 57.55 0.00 0.00 8659.88 4771.02 18245.22 00:08:08.773 [2024-11-07T09:36:36.444Z] =================================================================================================================== 00:08:08.773 [2024-11-07T09:36:36.444Z] Total : 14733.89 57.55 0.00 0.00 8659.88 4771.02 18245.22 00:08:08.773 00:08:08.773 Latency(us) 00:08:08.773 [2024-11-07T09:36:36.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.773 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:08.773 Nvme1n1 : 1.01 15580.26 60.86 0.00 0.00 8191.27 4587.52 16567.50 00:08:08.773 [2024-11-07T09:36:36.444Z] =================================================================================================================== 00:08:08.773 [2024-11-07T09:36:36.444Z] Total : 15580.26 60.86 0.00 0.00 8191.27 4587.52 16567.50 00:08:08.773 264152.00 IOPS, 1031.84 MiB/s 00:08:08.773 Latency(us) 00:08:08.773 [2024-11-07T09:36:36.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.773 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:08.773 Nvme1n1 : 1.00 263762.51 1030.32 0.00 0.00 483.09 212.99 2057.83 00:08:08.773 [2024-11-07T09:36:36.444Z] 
=================================================================================================================== 00:08:08.773 [2024-11-07T09:36:36.444Z] Total : 263762.51 1030.32 0.00 0.00 483.09 212.99 2057.83 00:08:09.032 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3658652 00:08:09.032 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3658654 00:08:09.032 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3658657 00:08:09.032 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:09.033 rmmod nvme_rdma 00:08:09.033 rmmod nvme_fabrics 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3658505 ']' 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3658505 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3658505 ']' 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3658505 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3658505 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = 
sudo ']' 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3658505' 00:08:09.033 killing process with pid 3658505 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3658505 00:08:09.033 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3658505 00:08:09.292 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:09.292 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:09.292 00:08:09.292 real 0m10.399s 00:08:09.292 user 0m20.176s 00:08:09.292 sys 0m6.474s 00:08:09.292 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:09.292 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:09.292 ************************************ 00:08:09.292 END TEST nvmf_bdev_io_wait 00:08:09.292 ************************************ 00:08:09.292 10:36:36 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:08:09.292 10:36:36 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:09.292 10:36:36 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:09.292 10:36:36 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:09.292 ************************************ 00:08:09.292 START TEST nvmf_queue_depth 00:08:09.292 ************************************ 00:08:09.292 10:36:36 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:08:09.552 * Looking for test storage... 
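The shutdown above walks the killprocess idiom from autotest_common.sh: verify the pid is set and alive, confirm the process name (reactor_0, the SPDK primary thread) is not sudo, then kill and reap it so the listener frees before the next test. An abridged sketch of that flow, assuming a Linux host where ps supports --no-headers; the sudo branch of the real helper is omitted:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                         # nothing to kill
    kill -0 "$pid" || return 0                        # process already exited
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for nvmf_tgt
    if [ "$process_name" != "sudo" ]; then
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"                                       # reap; works because nvmf_tgt is a child
}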
00:08:09.552 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:09.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.552 --rc genhtml_branch_coverage=1 00:08:09.552 --rc genhtml_function_coverage=1 00:08:09.552 --rc genhtml_legend=1 00:08:09.552 --rc geninfo_all_blocks=1 00:08:09.552 --rc geninfo_unexecuted_blocks=1 00:08:09.552 00:08:09.552 ' 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:09.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.552 --rc genhtml_branch_coverage=1 00:08:09.552 --rc genhtml_function_coverage=1 00:08:09.552 --rc genhtml_legend=1 00:08:09.552 --rc geninfo_all_blocks=1 00:08:09.552 --rc geninfo_unexecuted_blocks=1 00:08:09.552 00:08:09.552 ' 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:09.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.552 --rc genhtml_branch_coverage=1 00:08:09.552 --rc genhtml_function_coverage=1 00:08:09.552 --rc genhtml_legend=1 00:08:09.552 --rc geninfo_all_blocks=1 00:08:09.552 --rc geninfo_unexecuted_blocks=1 00:08:09.552 00:08:09.552 ' 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:09.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.552 --rc genhtml_branch_coverage=1 00:08:09.552 --rc genhtml_function_coverage=1 00:08:09.552 --rc genhtml_legend=1 00:08:09.552 --rc geninfo_all_blocks=1 00:08:09.552 --rc geninfo_unexecuted_blocks=1 00:08:09.552 00:08:09.552 ' 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.552 10:36:37 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.552 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:09.553 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:09.553 10:36:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.119 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.119 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:16.119 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:16.119 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:16.119 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:16.119 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:16.119 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:16.119 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:16.119 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:16.119 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:16.119 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:16.119 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:16.120 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:16.120 Found 0000:d9:00.1 (0x15b3 - 0x1015) 
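nvmftestinit's device scan above builds per-vendor PCI lists from a pci_bus_cache lookup (assumed here to be prepopulated by the harness with "vendor:device" -> PCI address mappings), keeps only the Mellanox list because SPDK_TEST_NVMF_NICS=mlx5, and resolves each address to its netdev through sysfs. A condensed sketch of how the two ConnectX ports in this run are found:

mellanox=0x15b3
mlx=(${pci_bus_cache["$mellanox:0x1015"]})   # -> 0000:d9:00.0 0000:d9:00.1 in this run
pci_devs=("${mlx[@]}")                       # mlx5 run: only the Mellanox list survives
for pci in "${pci_devs[@]}"; do
    echo "Found $pci ($mellanox - 0x1015)"
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs netdev directories
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip paths -> mlx_0_0, mlx_0_1
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done

Because 0x1015 matches neither the \0\x\1\0\1\7 nor the \0\x\1\0\1\9 pattern tested above, the plain rdma branch applies and NVME_CONNECT becomes 'nvme connect -i 15', adding a 15-retry connect count.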
00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:16.120 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:16.120 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:16.120 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:16.121 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:16.121 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:16.121 altname enp217s0f0np0 00:08:16.121 altname ens818f0np0 00:08:16.121 inet 192.168.100.8/24 scope global mlx_0_0 00:08:16.121 valid_lft forever preferred_lft forever 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:16.121 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:16.121 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:16.121 altname enp217s0f1np1 00:08:16.121 altname ens818f1np1 00:08:16.121 inet 192.168.100.9/24 scope global mlx_0_1 00:08:16.121 valid_lft forever preferred_lft forever 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:16.121 10:36:43 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:16.121 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:16.381 192.168.100.9' 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:16.381 192.168.100.9' 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@485 -- # head -n 1 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:16.381 192.168.100.9' 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3662336 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3662336 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3662336 ']' 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:16.381 10:36:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.381 [2024-11-07 10:36:43.924386] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
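Condensed, the address discovery traced above boils down to the following sketch (helper and variable names mirror nvmf/common.sh as traced; it assumes both mlx interfaces already carry IPv4 addresses, as they do in this run):

    # Reduce "6: mlx_0_0 inet 192.168.100.8/24 ..." to the bare address.
    get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
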
00:08:16.381 [2024-11-07 10:36:43.924434] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.381 [2024-11-07 10:36:44.001931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.381 [2024-11-07 10:36:44.038444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.381 [2024-11-07 10:36:44.038480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.381 [2024-11-07 10:36:44.038489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.381 [2024-11-07 10:36:44.038498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.381 [2024-11-07 10:36:44.038527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.381 [2024-11-07 10:36:44.039139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.641 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:16.641 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:16.641 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:16.641 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:16.641 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.641 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.641 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:16.641 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.641 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.641 [2024-11-07 10:36:44.203515] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19f3ea0/0x19f8390) succeed. 00:08:16.641 [2024-11-07 10:36:44.212424] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19f5350/0x1a39a30) succeed. 
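With nvmf_tgt up and both mlx5 IB devices registered, queue_depth.sh configures the target over RPC. Stripped of the xtrace noise above and below, the sequence is (rpc.py path as printed for multipath.sh later in this log; rpc_cmd talks to the default /var/tmp/spdk.sock, and the -u comment is our reading of the flag, not from the trace):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # -u: in-capsule data size
    $rpc bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
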
00:08:16.641 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.641 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:16.641 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.641 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.641 Malloc0 00:08:16.641 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.641 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:16.641 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.642 [2024-11-07 10:36:44.299540] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3662489 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3662489 /var/tmp/bdevperf.sock 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3662489 ']' 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:16.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:16.642 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.901 [2024-11-07 10:36:44.351101] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:08:16.901 [2024-11-07 10:36:44.351148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3662489 ] 00:08:16.901 [2024-11-07 10:36:44.427009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.901 [2024-11-07 10:36:44.467557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.901 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:16.901 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:16.901 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:16.901 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.901 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.161 NVMe0n1 00:08:17.161 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.161 10:36:44 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:17.161 Running I/O for 10 seconds... 
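The initiator side of the test is equally small. The commands traced above amount to the sketch below (paths as printed in the trace; backgrounding of the -z server is implied by the waitforlisten on /var/tmp/bdevperf.sock):

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # bdevperf idles as an RPC server: queue depth 1024, 4 KiB I/O, 10 s verify run.
    $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # Hand it an NVMe-oF/RDMA bdev, then trigger the run.
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
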
00:08:19.477 17408.00 IOPS, 68.00 MiB/s
[2024-11-07T09:36:48.086Z] 17747.50 IOPS, 69.33 MiB/s
[2024-11-07T09:36:49.023Z] 17781.67 IOPS, 69.46 MiB/s
[2024-11-07T09:36:49.961Z] 17920.00 IOPS, 70.00 MiB/s
[2024-11-07T09:36:50.897Z] 17930.40 IOPS, 70.04 MiB/s
[2024-11-07T09:36:51.835Z] 17920.00 IOPS, 70.00 MiB/s
[2024-11-07T09:36:52.771Z] 17934.57 IOPS, 70.06 MiB/s
[2024-11-07T09:36:54.150Z] 17937.00 IOPS, 70.07 MiB/s
[2024-11-07T09:36:55.087Z] 17976.89 IOPS, 70.22 MiB/s
[2024-11-07T09:36:55.087Z] 17977.10 IOPS, 70.22 MiB/s
00:08:27.416 Latency(us)
00:08:27.416 [2024-11-07T09:36:55.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:27.416 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:08:27.416 Verification LBA range: start 0x0 length 0x4000
00:08:27.416 NVMe0n1 : 10.04 18005.30 70.33 0.00 0.00 56703.44 12478.05 37539.02
00:08:27.416 [2024-11-07T09:36:55.087Z] ===================================================================================================================
00:08:27.416 [2024-11-07T09:36:55.087Z] Total : 18005.30 70.33 0.00 0.00 56703.44 12478.05 37539.02
00:08:27.416 {
00:08:27.416   "results": [
00:08:27.416     {
00:08:27.416       "job": "NVMe0n1",
00:08:27.416       "core_mask": "0x1",
00:08:27.416       "workload": "verify",
00:08:27.416       "status": "finished",
00:08:27.416       "verify_range": {
00:08:27.416         "start": 0,
00:08:27.416         "length": 16384
00:08:27.416       },
00:08:27.416       "queue_depth": 1024,
00:08:27.416       "io_size": 4096,
00:08:27.416       "runtime": 10.039988,
00:08:27.416       "iops": 18005.30040474152,
00:08:27.416       "mibps": 70.33320470602156,
00:08:27.416       "io_failed": 0,
00:08:27.416       "io_timeout": 0,
00:08:27.416       "avg_latency_us": 56703.43601914003,
00:08:27.416       "min_latency_us": 12478.0544,
00:08:27.416       "max_latency_us": 37539.0208
00:08:27.416     }
00:08:27.416   ],
00:08:27.416   "core_count": 1
00:08:27.416 }
00:08:27.416 10:36:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3662489
00:08:27.416 10:36:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3662489 ']'
00:08:27.416 10:36:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3662489
00:08:27.416 10:36:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname
00:08:27.416 10:36:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:08:27.417 10:36:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3662489
00:08:27.417 10:36:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:08:27.417 10:36:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:08:27.417 10:36:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3662489'
killing process with pid 3662489
00:08:27.417 10:36:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3662489
Received shutdown signal, test time was about 10.000000 seconds
00:08:27.417
00:08:27.417 Latency(us)
[2024-11-07T09:36:55.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-07T09:36:55.088Z] ===================================================================================================================
00:08:27.417 [2024-11-07T09:36:55.088Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:08:27.417 10:36:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3662489
00:08:27.417 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:08:27.417 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:08:27.417 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:27.417 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:08:27.417 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:27.417 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:27.417 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:08:27.417 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:27.417 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:08:27.417 rmmod nvme_rdma
00:08:27.676 rmmod nvme_fabrics
00:08:27.676 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:27.676 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:08:27.676 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:08:27.676 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3662336 ']'
00:08:27.676 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3662336
00:08:27.676 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3662336 ']'
00:08:27.676 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3662336
00:08:27.676 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname
00:08:27.676 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:08:27.676 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3662336
00:08:27.676 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:08:27.676 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:08:27.676 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3662336'
killing process with pid 3662336
00:08:27.676 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3662336
00:08:27.676 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3662336
00:08:27.936 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:27.936 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:08:27.936
00:08:27.936 real 0m18.485s
00:08:27.936 user 0m24.244s
00:08:27.936 sys 0m5.805s
00:08:27.936
10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:27.936 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.936 ************************************ 00:08:27.936 END TEST nvmf_queue_depth 00:08:27.936 ************************************ 00:08:27.936 10:36:55 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:08:27.936 10:36:55 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:27.936 10:36:55 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:27.936 10:36:55 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:27.936 ************************************ 00:08:27.936 START TEST nvmf_target_multipath 00:08:27.936 ************************************ 00:08:27.936 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:08:27.936 * Looking for test storage... 00:08:27.936 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:27.936 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:27.936 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:27.936 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- 
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:28.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.196 --rc genhtml_branch_coverage=1 00:08:28.196 --rc genhtml_function_coverage=1 00:08:28.196 --rc genhtml_legend=1 00:08:28.196 --rc geninfo_all_blocks=1 00:08:28.196 --rc geninfo_unexecuted_blocks=1 00:08:28.196 00:08:28.196 ' 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:28.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.196 --rc genhtml_branch_coverage=1 00:08:28.196 --rc genhtml_function_coverage=1 00:08:28.196 --rc genhtml_legend=1 00:08:28.196 --rc geninfo_all_blocks=1 00:08:28.196 --rc geninfo_unexecuted_blocks=1 00:08:28.196 00:08:28.196 ' 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:28.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.196 --rc genhtml_branch_coverage=1 00:08:28.196 --rc genhtml_function_coverage=1 00:08:28.196 --rc genhtml_legend=1 00:08:28.196 --rc geninfo_all_blocks=1 00:08:28.196 --rc geninfo_unexecuted_blocks=1 00:08:28.196 00:08:28.196 ' 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:28.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.196 --rc genhtml_branch_coverage=1 00:08:28.196 --rc genhtml_function_coverage=1 00:08:28.196 --rc genhtml_legend=1 00:08:28.196 --rc geninfo_all_blocks=1 00:08:28.196 --rc geninfo_unexecuted_blocks=1 00:08:28.196 00:08:28.196 ' 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:28.196 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.197 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:28.197 10:36:55 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:34.768 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:34.768 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.768 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:34.769 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:34.769 
10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:34.769 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:34.769 10:37:01 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
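The backslash-escaped comparisons in the loop records above and below are only how xtrace prints a literal [[ == ]] match; get_rdma_if_list, as traced, reduces to the sketch below (keep each net device that rxe_cfg also reported):

    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"   # mlx_0_0, then mlx_0_1
                continue 2        # move on to the next net_dev once matched
            fi
        done
    done
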
00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:34.769 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:34.769 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:34.769 altname enp217s0f0np0 00:08:34.769 altname ens818f0np0 00:08:34.769 inet 192.168.100.8/24 scope global mlx_0_0 00:08:34.769 valid_lft forever preferred_lft forever 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:34.769 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:34.769 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:34.769 altname enp217s0f1np1 00:08:34.769 altname ens818f1np1 00:08:34.769 inet 192.168.100.9/24 scope global mlx_0_1 00:08:34.769 valid_lft forever preferred_lft forever 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:34.769 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:34.770 192.168.100.9' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:34.770 192.168.100.9' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:34.770 192.168.100.9' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:08:34.770 run this test only with TCP transport for now 00:08:34.770 10:37:02 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:34.770 rmmod nvme_rdma 00:08:34.770 rmmod nvme_fabrics 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:34.770 00:08:34.770 real 0m6.803s 00:08:34.770 user 0m1.876s 00:08:34.770 sys 0m4.990s 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 
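The nvmftestfini teardown traced above suspends errexit (set +e) and retries module removal up to 20 times before restoring it. A minimal standalone sketch of that pattern, assuming a plain bash shell — not line-for-line the script's control flow, and the sleep is illustrative (the traced loop retries immediately):

set +e
for i in {1..20}; do
    # unload the fabrics stack; the "rmmod nvme_rdma" / "rmmod nvme_fabrics"
    # messages above are the kernel's verbose output from these two calls
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e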
00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:34.770 ************************************ 00:08:34.770 END TEST nvmf_target_multipath 00:08:34.770 ************************************ 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.770 ************************************ 00:08:34.770 START TEST nvmf_zcopy 00:08:34.770 ************************************ 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:08:34.770 * Looking for test storage... 00:08:34.770 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:34.770 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.030 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:35.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.031 --rc genhtml_branch_coverage=1 00:08:35.031 --rc genhtml_function_coverage=1 00:08:35.031 --rc genhtml_legend=1 00:08:35.031 --rc geninfo_all_blocks=1 00:08:35.031 --rc geninfo_unexecuted_blocks=1 00:08:35.031 00:08:35.031 ' 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:35.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.031 --rc genhtml_branch_coverage=1 00:08:35.031 --rc genhtml_function_coverage=1 00:08:35.031 --rc genhtml_legend=1 00:08:35.031 --rc geninfo_all_blocks=1 00:08:35.031 --rc geninfo_unexecuted_blocks=1 00:08:35.031 00:08:35.031 ' 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:35.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.031 --rc genhtml_branch_coverage=1 00:08:35.031 --rc genhtml_function_coverage=1 00:08:35.031 --rc genhtml_legend=1 00:08:35.031 --rc geninfo_all_blocks=1 00:08:35.031 --rc geninfo_unexecuted_blocks=1 00:08:35.031 00:08:35.031 ' 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:35.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.031 --rc genhtml_branch_coverage=1 00:08:35.031 --rc genhtml_function_coverage=1 00:08:35.031 --rc genhtml_legend=1 00:08:35.031 --rc geninfo_all_blocks=1 00:08:35.031 --rc geninfo_unexecuted_blocks=1 00:08:35.031 00:08:35.031 ' 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:35.031 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:35.031 10:37:02 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:41.600 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:41.600 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:41.600 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
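Above, gather_supported_nvmf_pci_devs matches each PCI device against a vendor:device table (0x15b3:0x1015 in this run) and echoes the hits. The pci_bus_cache lookup it uses is internal to nvmf/common.sh; a rough standalone equivalent can re-derive the same IDs from standard sysfs attributes:

mellanox=0x15b3
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")    # e.g. 0x15b3 (Mellanox), as matched above
    device=$(cat "$pci/device")    # e.g. 0x1015
    [ "$vendor" = "$mellanox" ] && echo "Found ${pci##*/} ($vendor - $device)"
done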
00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:41.601 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:41.601 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:41.601 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:41.601 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:41.601 altname enp217s0f0np0 00:08:41.601 altname ens818f0np0 00:08:41.601 inet 192.168.100.8/24 scope global mlx_0_0 
00:08:41.601 valid_lft forever preferred_lft forever 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:41.601 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:41.601 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:41.601 altname enp217s0f1np1 00:08:41.601 altname ens818f1np1 00:08:41.601 inet 192.168.100.9/24 scope global mlx_0_1 00:08:41.601 valid_lft forever preferred_lft forever 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:41.601 10:37:09 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:41.601 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:41.602 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:41.602 192.168.100.9' 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:41.861 192.168.100.9' 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:41.861 192.168.100.9' 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3670955 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3670955 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3670955 ']' 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:41.861 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.861 [2024-11-07 10:37:09.370871] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:08:41.861 [2024-11-07 10:37:09.370921] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.861 [2024-11-07 10:37:09.445122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.861 [2024-11-07 10:37:09.481497] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.861 [2024-11-07 10:37:09.481537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.861 [2024-11-07 10:37:09.481547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.861 [2024-11-07 10:37:09.481555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.861 [2024-11-07 10:37:09.481562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
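waitforlisten above blocks until the freshly started nvmf_tgt (pid 3670955) is both alive and serving its RPC socket. A hedged sketch of that wait, using the /var/tmp/spdk.sock path and max_retries=100 printed in the trace; the function name and sleep interval are illustrative, not SPDK's exact implementation:

waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    local i
    for i in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [ -S "$sock" ] && return 0               # RPC socket is accepting
        sleep 0.1
    done
    return 1                                     # timed out
}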
00:08:41.861 [2024-11-07 10:37:09.482148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.126 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:42.126 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:42.126 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:42.126 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:42.126 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.126 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.126 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:08:42.126 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:08:42.126 Unsupported transport: rdma 00:08:42.126 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:08:42.126 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:08:42.126 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@810 -- # type=--id 00:08:42.126 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@811 -- # id=0 00:08:42.126 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:08:42.126 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:42.126 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@822 -- # for n in $shm_files 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:42.127 nvmf_trace.0 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # return 0 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:42.127 rmmod nvme_rdma 00:08:42.127 rmmod nvme_fabrics 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
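process_shm above archives the SPDK trace buffer left in /dev/shm before teardown. Stripped of the surrounding option parsing, the capture reduces to the two commands logged (find, then tar); $out stands in for the job's output directory as it appears in the tar line:

id=0
out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
for f in $(find /dev/shm -name "*.${id}" -printf '%f\n'); do
    # produces nvmf_trace.0_shm.tar.gz in the run above
    tar -C /dev/shm/ -cvzf "${out}/${f}_shm.tar.gz" "$f"
done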
00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3670955 ']' 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3670955 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3670955 ']' 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3670955 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3670955 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3670955' 00:08:42.127 killing process with pid 3670955 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3670955 00:08:42.127 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3670955 00:08:42.387 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:42.387 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:42.387 00:08:42.387 real 0m7.628s 00:08:42.387 user 0m2.618s 00:08:42.387 sys 0m5.595s 00:08:42.387 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:42.387 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.387 ************************************ 00:08:42.387 END TEST nvmf_zcopy 00:08:42.387 ************************************ 00:08:42.387 10:37:09 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:08:42.387 10:37:09 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:42.387 10:37:09 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:42.387 10:37:09 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:42.387 ************************************ 00:08:42.387 START TEST nvmf_nmic 00:08:42.387 ************************************ 00:08:42.387 10:37:09 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:08:42.646 * Looking for test storage... 
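killprocess 3670955 above probes the pid before signalling: kill -0 confirms it still exists, ps --no-headers -o comm= recovers the process name (reactor_1 here, checked against "sudo"), and only then is the process killed and reaped. A condensed sketch of that sequence, omitting the sudo special case:

killprocess_sketch() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0      # nothing to do, already gone
    name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_1 above
    echo "killing process with pid $pid ($name)"
    kill "$pid" && wait "$pid" 2>/dev/null      # reap if it is our child
}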
00:08:42.646 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:42.646 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:42.646 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:42.646 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:08:42.646 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:42.646 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.646 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:42.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.647 --rc genhtml_branch_coverage=1 00:08:42.647 --rc genhtml_function_coverage=1 00:08:42.647 --rc genhtml_legend=1 00:08:42.647 --rc geninfo_all_blocks=1 00:08:42.647 --rc geninfo_unexecuted_blocks=1 00:08:42.647 00:08:42.647 ' 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:42.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.647 --rc genhtml_branch_coverage=1 00:08:42.647 --rc genhtml_function_coverage=1 00:08:42.647 --rc genhtml_legend=1 00:08:42.647 --rc geninfo_all_blocks=1 00:08:42.647 --rc geninfo_unexecuted_blocks=1 00:08:42.647 00:08:42.647 ' 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:42.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.647 --rc genhtml_branch_coverage=1 00:08:42.647 --rc genhtml_function_coverage=1 00:08:42.647 --rc genhtml_legend=1 00:08:42.647 --rc geninfo_all_blocks=1 00:08:42.647 --rc geninfo_unexecuted_blocks=1 00:08:42.647 00:08:42.647 ' 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:42.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.647 --rc genhtml_branch_coverage=1 00:08:42.647 --rc genhtml_function_coverage=1 00:08:42.647 --rc genhtml_legend=1 00:08:42.647 --rc geninfo_all_blocks=1 00:08:42.647 --rc geninfo_unexecuted_blocks=1 00:08:42.647 00:08:42.647 ' 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.647 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
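The trace has just crossed into target/nmic.sh itself: the script sources the shared nvmf helpers, fixes the malloc bdev geometry, and hands control to nvmftestinit. A minimal sketch of that preamble, reconstructed purely from the target/nmic.sh@9..14 markers in the trace (the $rootdir variable is an assumed convention, not verified against the repo):

    # Hypothetical reconstruction of the nmic.sh preamble traced above.
    source "$rootdir/test/nvmf/common.sh"   # defines nvmftestinit, NVMF_PORT=4420, ...

    MALLOC_BDEV_SIZE=64     # MiB of backing store for the namespace under test
    MALLOC_BLOCK_SIZE=512   # logical block size in bytes

    nvmftestinit            # probe PCI NICs, load RDMA modules, assign test IPs

nvmftestinit's own trace follows: it traps nvmftestfini on exit, then walks the PCI bus looking for supported mlx5/e810/x722 NICs.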
00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.647 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.648 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.648 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:42.648 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:42.648 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:42.648 10:37:10 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.294 10:37:16 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:49.294 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:49.294 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:49.294 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:49.294 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
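At this point load_ib_rdma_modules is inserting the kernel RDMA stack; the final modprobe of rdma_ucm follows at the top of the next trace line. The whole step reduces to a guard on uname plus a fixed module list, roughly:

    # Sketch of the module-loading step traced at nvmf/common.sh@62..72 (the exact
    # guard logic in common.sh may differ from this reconstruction).
    [ "$(uname)" = Linux ] || return 0   # RDMA module loading is Linux-only
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done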
00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:49.294 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:49.294 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:49.294 altname enp217s0f0np0 00:08:49.294 altname 
ens818f0np0 00:08:49.294 inet 192.168.100.8/24 scope global mlx_0_0 00:08:49.294 valid_lft forever preferred_lft forever 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:49.294 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:49.294 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:49.294 altname enp217s0f1np1 00:08:49.294 altname ens818f1np1 00:08:49.294 inet 192.168.100.9/24 scope global mlx_0_1 00:08:49.294 valid_lft forever preferred_lft forever 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:49.294 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
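The 192.168.100.8/.9 addresses above come out of the same small helper the trace keeps replaying: get_ip_address pipes one line of ip -o -4 output through awk and cut. As a standalone sketch (function name and behavior as shown at nvmf/common.sh@116..117 in the trace):

    # get_ip_address: first IPv4 address of a given netdev, /prefix stripped.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # -> 192.168.100.9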
00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:49.295 192.168.100.9' 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:49.295 192.168.100.9' 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:49.295 192.168.100.9' 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3674209 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3674209 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3674209 ']' 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.295 [2024-11-07 10:37:16.701821] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:08:49.295 [2024-11-07 10:37:16.701878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.295 [2024-11-07 10:37:16.780966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.295 [2024-11-07 10:37:16.823964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.295 [2024-11-07 10:37:16.824009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.295 [2024-11-07 10:37:16.824018] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.295 [2024-11-07 10:37:16.824027] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.295 [2024-11-07 10:37:16.824034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
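nvmfappstart has just launched the target: the command line logged above starts nvmf_tgt with shm id 0, every tracepoint group enabled (0xFFFF), and a four-core mask (0xF), then waitforlisten blocks on the UNIX-domain RPC socket. A hedged sketch of that start-and-wait pattern ($SPDK_BIN_DIR and the polling loop are assumptions; the real waitforlisten in autotest_common.sh is more elaborate):

    # Start the target exactly as logged, then poll the RPC socket until it answers.
    "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Stand-in for waitforlisten: rpc_get_methods succeeds once /var/tmp/spdk.sock is live.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done

The DPDK EAL notices and the four reactor_run lines that follow are the target coming up on cores 0-3 before waitforlisten returns.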
00:08:49.295 [2024-11-07 10:37:16.825640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.295 [2024-11-07 10:37:16.825737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.295 [2024-11-07 10:37:16.825829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.295 [2024-11-07 10:37:16.825831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:49.295 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.555 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.555 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:49.555 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.555 10:37:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.555 [2024-11-07 10:37:17.004396] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15fadf0/0x15ff2e0) succeed. 00:08:49.555 [2024-11-07 10:37:17.013798] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15fc480/0x1640980) succeed. 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.555 Malloc0 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:49.555 10:37:17 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.555 [2024-11-07 10:37:17.198714] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:49.555 test case1: single bdev can't be used in multiple subsystems 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.555 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.556 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:49.556 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:49.556 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.556 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.815 [2024-11-07 10:37:17.226491] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:49.815 [2024-11-07 10:37:17.226518] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:49.815 [2024-11-07 10:37:17.226528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.815 request: 00:08:49.815 { 00:08:49.815 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:49.815 "namespace": { 00:08:49.815 "bdev_name": "Malloc0", 00:08:49.815 "no_auto_visible": false 00:08:49.815 }, 00:08:49.815 "method": "nvmf_subsystem_add_ns", 00:08:49.815 "req_id": 1 00:08:49.815 } 00:08:49.815 Got JSON-RPC error response 00:08:49.815 response: 00:08:49.815 { 00:08:49.815 "code": -32602, 00:08:49.815 "message": "Invalid parameters" 00:08:49.815 } 00:08:49.815 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:49.815 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:49.815 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:49.815 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:49.815 Adding namespace failed - expected result. 
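That JSON-RPC exchange is the whole point of test case 1: Malloc0 is already claimed exclusive_write by cnode1, so attaching it to a second subsystem must be rejected. Stripped of the rpc_cmd wrapper, the sequence reduces to three plain rpc.py calls (NQNs, address, and error text taken from the trace; the explicit failure check is an illustrative paraphrase of the script's nmic_status handling):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma \
        -a 192.168.100.8 -s 4420

    # Malloc0 already belongs to cnode1, so this add must fail (-32602 above):
    if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo "unexpected: one bdev was added to two subsystems" >&2
        exit 1
    fi
    echo ' Adding namespace failed - expected result.'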
00:08:49.815 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:49.815 test case2: host connect to nvmf target in multiple paths 00:08:49.815 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:08:49.815 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.815 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.815 [2024-11-07 10:37:17.242555] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:08:49.815 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.815 10:37:17 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:50.754 10:37:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:08:51.691 10:37:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:51.691 10:37:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:08:51.691 10:37:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:08:51.691 10:37:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:08:51.691 10:37:19 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:08:53.595 10:37:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:08:53.870 10:37:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:08:53.870 10:37:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:08:53.870 10:37:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:08:53.870 10:37:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:08:53.870 10:37:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:08:53.870 10:37:21 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:53.870 [global] 00:08:53.870 thread=1 00:08:53.870 invalidate=1 00:08:53.870 rw=write 00:08:53.870 time_based=1 00:08:53.870 runtime=1 00:08:53.870 ioengine=libaio 00:08:53.870 direct=1 00:08:53.870 bs=4096 00:08:53.870 iodepth=1 00:08:53.870 norandommap=0 00:08:53.870 numjobs=1 00:08:53.870 00:08:53.870 verify_dump=1 00:08:53.870 verify_backlog=512 00:08:53.870 verify_state_save=0 00:08:53.870 do_verify=1 00:08:53.870 verify=crc32c-intel 00:08:53.870 [job0] 00:08:53.870 filename=/dev/nvme0n1 00:08:53.870 Could not set queue depth (nvme0n1) 00:08:54.131 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:54.131 fio-3.35 00:08:54.131 Starting 1 thread 00:08:55.509 00:08:55.509 job0: (groupid=0, jobs=1): err= 0: pid=3675267: Thu Nov 7 10:37:22 2024 00:08:55.509 read: IOPS=7051, BW=27.5MiB/s (28.9MB/s)(27.6MiB/1001msec) 00:08:55.509 slat (nsec): min=8196, max=33100, avg=8722.99, stdev=904.33 00:08:55.509 clat (nsec): min=39352, max=88753, avg=58692.80, stdev=3460.64 00:08:55.509 lat (nsec): min=58847, max=98480, avg=67415.79, stdev=3513.98 00:08:55.509 clat percentiles (nsec): 00:08:55.509 | 1.00th=[51968], 5.00th=[53504], 10.00th=[54528], 20.00th=[55552], 00:08:55.509 | 30.00th=[56576], 40.00th=[57600], 50.00th=[58624], 60.00th=[59136], 00:08:55.509 | 70.00th=[60160], 80.00th=[61696], 90.00th=[63232], 95.00th=[64768], 00:08:55.509 | 99.00th=[68096], 99.50th=[69120], 99.90th=[75264], 99.95th=[77312], 00:08:55.509 | 99.99th=[88576] 00:08:55.509 write: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:08:55.509 slat (nsec): min=10739, max=49548, avg=11408.29, stdev=1193.85 00:08:55.509 clat (nsec): min=31717, max=98597, avg=56603.62, stdev=3641.39 00:08:55.509 lat (usec): min=58, max=140, avg=68.01, stdev= 3.84 00:08:55.509 clat percentiles (nsec): 00:08:55.509 | 1.00th=[49920], 5.00th=[51456], 10.00th=[52480], 20.00th=[53504], 00:08:55.509 | 30.00th=[54528], 40.00th=[55552], 50.00th=[56576], 60.00th=[57088], 00:08:55.509 | 70.00th=[58112], 80.00th=[59648], 90.00th=[61184], 95.00th=[62720], 00:08:55.509 | 99.00th=[66048], 99.50th=[67072], 99.90th=[78336], 99.95th=[90624], 00:08:55.509 | 99.99th=[98816] 00:08:55.509 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:08:55.509 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:08:55.509 lat (usec) : 50=0.77%, 100=99.23% 00:08:55.509 cpu : usr=13.20%, sys=16.50%, ctx=14227, majf=0, minf=1 00:08:55.509 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:55.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.509 issued rwts: total=7059,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.509 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:55.509 00:08:55.509 Run status group 0 (all jobs): 00:08:55.509 READ: bw=27.5MiB/s (28.9MB/s), 27.5MiB/s-27.5MiB/s (28.9MB/s-28.9MB/s), io=27.6MiB (28.9MB), run=1001-1001msec 00:08:55.509 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:08:55.509 00:08:55.509 Disk stats (read/write): 00:08:55.509 nvme0n1: ios=6193/6650, merge=0/0, ticks=303/313, in_queue=616, util=90.58% 00:08:55.509 10:37:22 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:57.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 
-- # lsblk -l -o NAME,SERIAL 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:57.418 rmmod nvme_rdma 00:08:57.418 rmmod nvme_fabrics 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3674209 ']' 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3674209 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3674209 ']' 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3674209 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3674209 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3674209' 00:08:57.418 killing process with pid 3674209 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3674209 00:08:57.418 10:37:24 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 3674209 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:57.678 00:08:57.678 real 0m15.123s 00:08:57.678 user 0m43.797s 00:08:57.678 sys 0m5.842s 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:57.678 
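With the fio pass verified and both controllers disconnected, nvmftestfini unwinds everything nvmftestinit set up. Condensed from the teardown trace above (killprocess's kill -0 liveness check and reactor_0 process-name lookup are elided):

    # Hedged outline of the nvmftestfini teardown just traced.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drops both paths (4420 and 4421)
    sync
    modprobe -v -r nvme-rdma      # 'rmmod nvme_rdma' / 'rmmod nvme_fabrics' above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"               # killprocess: stop the nvmf_tgt started earlier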
************************************ 00:08:57.678 END TEST nvmf_nmic 00:08:57.678 ************************************ 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:57.678 ************************************ 00:08:57.678 START TEST nvmf_fio_target 00:08:57.678 ************************************ 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:08:57.678 * Looking for test storage... 00:08:57.678 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.678 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:57.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.679 --rc genhtml_branch_coverage=1 00:08:57.679 --rc genhtml_function_coverage=1 00:08:57.679 --rc genhtml_legend=1 00:08:57.679 --rc geninfo_all_blocks=1 00:08:57.679 --rc geninfo_unexecuted_blocks=1 00:08:57.679 00:08:57.679 ' 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:57.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.679 --rc genhtml_branch_coverage=1 00:08:57.679 --rc genhtml_function_coverage=1 00:08:57.679 --rc genhtml_legend=1 00:08:57.679 --rc geninfo_all_blocks=1 00:08:57.679 --rc geninfo_unexecuted_blocks=1 00:08:57.679 00:08:57.679 ' 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:57.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.679 --rc genhtml_branch_coverage=1 00:08:57.679 --rc genhtml_function_coverage=1 00:08:57.679 --rc genhtml_legend=1 00:08:57.679 --rc geninfo_all_blocks=1 00:08:57.679 --rc geninfo_unexecuted_blocks=1 00:08:57.679 00:08:57.679 ' 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:57.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.679 --rc genhtml_branch_coverage=1 00:08:57.679 --rc genhtml_function_coverage=1 00:08:57.679 --rc genhtml_legend=1 00:08:57.679 --rc geninfo_all_blocks=1 00:08:57.679 --rc geninfo_unexecuted_blocks=1 00:08:57.679 00:08:57.679 ' 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:57.679 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:57.939 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:57.939 
10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:57.939 10:37:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:04.517 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:04.517 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:04.518 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:04.518 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:04.518 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:04.518 10:37:32 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:04.518 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:04.778 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:04.778 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:04.778 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:04.778 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:04.778 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:04.778 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:04.778 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:04.778 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:04.778 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:04.778 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:04.778 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:04.778 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.778 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:04.778 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:04.778 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:04.779 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:04.779 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:04.779 altname enp217s0f0np0 00:09:04.779 altname ens818f0np0 00:09:04.779 inet 192.168.100.8/24 scope global mlx_0_0 00:09:04.779 valid_lft forever preferred_lft forever 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:04.779 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:04.779 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:04.779 altname enp217s0f1np1 00:09:04.779 altname ens818f1np1 00:09:04.779 inet 192.168.100.9/24 scope global mlx_0_1 00:09:04.779 valid_lft forever preferred_lft forever 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:04.779 10:37:32 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:04.779 192.168.100.9' 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:04.779 192.168.100.9' 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:04.779 192.168.100.9' 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3679136 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3679136 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3679136 ']' 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:04.779 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:05.039 [2024-11-07 10:37:32.455763] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:09:05.039 [2024-11-07 10:37:32.455815] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.039 [2024-11-07 10:37:32.533347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.039 [2024-11-07 10:37:32.574036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:05.039 [2024-11-07 10:37:32.574079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.039 [2024-11-07 10:37:32.574090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.039 [2024-11-07 10:37:32.574098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.039 [2024-11-07 10:37:32.574105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:05.039 [2024-11-07 10:37:32.575701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.039 [2024-11-07 10:37:32.575799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.039 [2024-11-07 10:37:32.575904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.039 [2024-11-07 10:37:32.575907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.039 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:05.039 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:09:05.039 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:05.039 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:05.039 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:05.299 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.299 10:37:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:05.299 [2024-11-07 10:37:32.915741] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20c9df0/0x20ce2e0) succeed. 00:09:05.299 [2024-11-07 10:37:32.924832] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20cb480/0x210f980) succeed. 
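A note for readers skimming the interleaved trace: at this point the RDMA transport is live on 192.168.100.8/192.168.100.9, and everything that follows is target/fio.sh assembling the test subsystem over JSON-RPC. The per-call log lines below condense to the standalone sketch that follows; the rpc= shorthand is introduced here for readability, while the commands, sizes, and ordering are exactly the ones the trace records.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# Seven RAM-backed malloc bdevs, 64 MiB each with 512-byte blocks; the
# "-> MallocN" comments note the names the trace assigns to each call.
$rpc bdev_malloc_create 64 512    # -> Malloc0 (exported directly)
$rpc bdev_malloc_create 64 512    # -> Malloc1 (exported directly)
$rpc bdev_malloc_create 64 512    # -> Malloc2
$rpc bdev_malloc_create 64 512    # -> Malloc3
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_malloc_create 64 512    # -> Malloc4
$rpc bdev_malloc_create 64 512    # -> Malloc5
$rpc bdev_malloc_create 64 512    # -> Malloc6
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
# One subsystem carrying all four namespaces, listening on the first RDMA IP.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

Because malloc bdevs are RAM-backed, the fio passes below exercise the NVMe-oF/RDMA data path without depending on physical disks; the initiator then attaches with the traced "nvme connect -i 15 ..." call, which surfaces the four namespaces as /dev/nvme0n1 through /dev/nvme0n4.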
00:09:05.558 10:37:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:05.817 10:37:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:05.817 10:37:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:06.077 10:37:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:06.077 10:37:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:06.077 10:37:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:06.077 10:37:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:06.335 10:37:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:06.335 10:37:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:06.593 10:37:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:06.852 10:37:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:06.852 10:37:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:07.111 10:37:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:07.111 10:37:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:07.111 10:37:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:07.111 10:37:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:07.371 10:37:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:07.629 10:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:07.629 10:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:07.888 10:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:07.889 10:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:08.148 10:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:08.148 [2024-11-07 10:37:35.740253] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:08.148 10:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:08.407 10:37:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:08.667 10:37:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:09.603 10:37:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:09.603 10:37:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:09.603 10:37:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:09.603 10:37:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:09:09.603 10:37:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:09:09.603 10:37:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:09:11.507 10:37:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:11.507 10:37:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:11.507 10:37:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:11.507 10:37:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:09:11.507 10:37:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:11.507 10:37:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:09:11.507 10:37:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:11.766 [global] 00:09:11.766 thread=1 00:09:11.766 invalidate=1 00:09:11.766 rw=write 00:09:11.766 time_based=1 00:09:11.766 runtime=1 00:09:11.766 ioengine=libaio 00:09:11.766 direct=1 00:09:11.766 bs=4096 00:09:11.766 iodepth=1 00:09:11.766 norandommap=0 00:09:11.766 numjobs=1 00:09:11.766 00:09:11.766 verify_dump=1 00:09:11.766 verify_backlog=512 00:09:11.766 verify_state_save=0 00:09:11.766 do_verify=1 00:09:11.766 verify=crc32c-intel 00:09:11.766 [job0] 00:09:11.766 filename=/dev/nvme0n1 00:09:11.766 [job1] 00:09:11.766 filename=/dev/nvme0n2 00:09:11.766 [job2] 00:09:11.766 filename=/dev/nvme0n3 00:09:11.766 [job3] 00:09:11.766 filename=/dev/nvme0n4 00:09:11.766 Could not set queue depth (nvme0n1) 00:09:11.766 Could not set queue depth (nvme0n2) 00:09:11.766 Could not set queue depth (nvme0n3) 00:09:11.766 Could not set queue depth (nvme0n4) 00:09:12.025 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.025 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.025 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.025 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.025 fio-3.35 00:09:12.025 Starting 4 threads 00:09:13.402 00:09:13.402 job0: (groupid=0, jobs=1): err= 0: pid=3680523: Thu Nov 7 10:37:40 2024 00:09:13.402 read: IOPS=4096, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1000msec) 00:09:13.402 slat (nsec): min=7954, max=29468, avg=9521.14, stdev=2462.16 00:09:13.402 clat (usec): min=64, max=302, avg=102.67, stdev=28.28 00:09:13.402 lat (usec): min=73, max=312, avg=112.19, stdev=29.44 00:09:13.402 clat percentiles (usec): 00:09:13.402 | 1.00th=[ 71], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 78], 00:09:13.402 | 30.00th=[ 80], 40.00th=[ 83], 50.00th=[ 88], 60.00th=[ 115], 00:09:13.402 | 70.00th=[ 123], 80.00th=[ 128], 90.00th=[ 137], 95.00th=[ 155], 00:09:13.402 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 202], 99.95th=[ 210], 00:09:13.402 | 99.99th=[ 302] 00:09:13.402 write: IOPS=4428, BW=17.3MiB/s (18.1MB/s)(17.3MiB/1000msec); 0 zone resets 00:09:13.402 slat (nsec): min=10049, max=45378, avg=12343.06, stdev=3292.03 00:09:13.402 clat (usec): min=50, max=253, avg=104.82, stdev=28.55 00:09:13.402 lat (usec): min=73, max=265, avg=117.17, stdev=29.88 00:09:13.402 clat percentiles (usec): 00:09:13.402 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 75], 00:09:13.402 | 30.00th=[ 78], 40.00th=[ 85], 50.00th=[ 113], 60.00th=[ 120], 00:09:13.403 | 70.00th=[ 124], 80.00th=[ 128], 90.00th=[ 137], 95.00th=[ 151], 00:09:13.403 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 196], 99.95th=[ 200], 00:09:13.403 | 99.99th=[ 253] 00:09:13.403 bw ( KiB/s): min=16384, max=16384, per=22.87%, avg=16384.00, stdev= 0.00, samples=1 00:09:13.403 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:13.403 lat (usec) : 100=48.83%, 250=51.15%, 500=0.02% 00:09:13.403 cpu : usr=7.70%, sys=10.20%, ctx=8524, majf=0, minf=1 00:09:13.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.403 issued rwts: total=4096,4428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.403 job1: (groupid=0, jobs=1): err= 0: pid=3680524: Thu Nov 7 10:37:40 2024 00:09:13.403 read: IOPS=4894, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1001msec) 00:09:13.403 slat (nsec): min=8064, max=30779, avg=8859.74, stdev=772.98 00:09:13.403 clat (usec): min=65, max=200, avg=89.29, stdev=24.09 00:09:13.403 lat (usec): min=73, max=214, avg=98.15, stdev=24.17 00:09:13.403 clat percentiles (usec): 00:09:13.403 | 1.00th=[ 70], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 76], 00:09:13.403 | 30.00th=[ 77], 40.00th=[ 79], 50.00th=[ 80], 60.00th=[ 82], 00:09:13.403 | 70.00th=[ 85], 80.00th=[ 91], 90.00th=[ 133], 95.00th=[ 145], 00:09:13.403 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 190], 99.95th=[ 192], 00:09:13.403 | 99.99th=[ 200] 00:09:13.403 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:09:13.403 slat (nsec): min=10373, max=37909, avg=11407.64, stdev=1126.93 00:09:13.403 clat (usec): min=50, max=193, 
avg=85.01, stdev=22.42 00:09:13.403 lat (usec): min=73, max=204, avg=96.42, stdev=22.47 00:09:13.403 clat percentiles (usec): 00:09:13.403 | 1.00th=[ 66], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 72], 00:09:13.403 | 30.00th=[ 74], 40.00th=[ 75], 50.00th=[ 77], 60.00th=[ 79], 00:09:13.403 | 70.00th=[ 81], 80.00th=[ 89], 90.00th=[ 125], 95.00th=[ 137], 00:09:13.403 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 186], 00:09:13.403 | 99.99th=[ 194] 00:09:13.403 bw ( KiB/s): min=19240, max=19240, per=26.85%, avg=19240.00, stdev= 0.00, samples=1 00:09:13.403 iops : min= 4810, max= 4810, avg=4810.00, stdev= 0.00, samples=1 00:09:13.403 lat (usec) : 100=82.05%, 250=17.95% 00:09:13.403 cpu : usr=9.20%, sys=12.00%, ctx=10019, majf=0, minf=1 00:09:13.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.403 issued rwts: total=4899,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.403 job2: (groupid=0, jobs=1): err= 0: pid=3680525: Thu Nov 7 10:37:40 2024 00:09:13.403 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:09:13.403 slat (nsec): min=8376, max=44769, avg=9695.87, stdev=2480.59 00:09:13.403 clat (usec): min=72, max=199, avg=108.74, stdev=23.51 00:09:13.403 lat (usec): min=81, max=207, avg=118.44, stdev=24.11 00:09:13.403 clat percentiles (usec): 00:09:13.403 | 1.00th=[ 80], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 88], 00:09:13.403 | 30.00th=[ 90], 40.00th=[ 93], 50.00th=[ 99], 60.00th=[ 117], 00:09:13.403 | 70.00th=[ 125], 80.00th=[ 133], 90.00th=[ 141], 95.00th=[ 147], 00:09:13.403 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 192], 99.95th=[ 196], 00:09:13.403 | 99.99th=[ 200] 00:09:13.403 write: IOPS=4257, BW=16.6MiB/s (17.4MB/s)(16.6MiB/1001msec); 0 zone resets 00:09:13.403 slat (nsec): min=10571, max=64518, avg=12409.07, stdev=2905.78 00:09:13.403 clat (usec): min=69, max=193, avg=103.24, stdev=22.33 00:09:13.403 lat (usec): min=81, max=206, avg=115.65, stdev=23.03 00:09:13.403 clat percentiles (usec): 00:09:13.403 | 1.00th=[ 75], 5.00th=[ 79], 10.00th=[ 80], 20.00th=[ 83], 00:09:13.403 | 30.00th=[ 86], 40.00th=[ 89], 50.00th=[ 94], 60.00th=[ 112], 00:09:13.403 | 70.00th=[ 119], 80.00th=[ 125], 90.00th=[ 135], 95.00th=[ 141], 00:09:13.403 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 180], 99.95th=[ 182], 00:09:13.403 | 99.99th=[ 194] 00:09:13.403 bw ( KiB/s): min=17112, max=17112, per=23.88%, avg=17112.00, stdev= 0.00, samples=1 00:09:13.403 iops : min= 4278, max= 4278, avg=4278.00, stdev= 0.00, samples=1 00:09:13.403 lat (usec) : 100=52.72%, 250=47.28% 00:09:13.403 cpu : usr=6.50%, sys=11.40%, ctx=8359, majf=0, minf=1 00:09:13.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.403 issued rwts: total=4096,4262,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.403 job3: (groupid=0, jobs=1): err= 0: pid=3680526: Thu Nov 7 10:37:40 2024 00:09:13.403 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:09:13.403 slat (nsec): min=8339, max=31441, avg=9317.70, stdev=1542.81 00:09:13.403 clat (usec): min=72, max=194, avg=109.30, 
stdev=20.68 00:09:13.403 lat (usec): min=81, max=203, avg=118.62, stdev=20.87 00:09:13.403 clat percentiles (usec): 00:09:13.403 | 1.00th=[ 78], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 87], 00:09:13.403 | 30.00th=[ 91], 40.00th=[ 98], 50.00th=[ 115], 60.00th=[ 121], 00:09:13.403 | 70.00th=[ 124], 80.00th=[ 128], 90.00th=[ 135], 95.00th=[ 139], 00:09:13.403 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 178], 99.95th=[ 180], 00:09:13.403 | 99.99th=[ 196] 00:09:13.403 write: IOPS=4114, BW=16.1MiB/s (16.9MB/s)(16.1MiB/1001msec); 0 zone resets 00:09:13.403 slat (nsec): min=10643, max=41790, avg=11853.12, stdev=1714.68 00:09:13.403 clat (usec): min=68, max=179, avg=107.84, stdev=21.94 00:09:13.403 lat (usec): min=80, max=190, avg=119.70, stdev=22.03 00:09:13.403 clat percentiles (usec): 00:09:13.403 | 1.00th=[ 75], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 83], 00:09:13.403 | 30.00th=[ 87], 40.00th=[ 104], 50.00th=[ 117], 60.00th=[ 121], 00:09:13.403 | 70.00th=[ 125], 80.00th=[ 128], 90.00th=[ 133], 95.00th=[ 137], 00:09:13.403 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 174], 99.95th=[ 174], 00:09:13.403 | 99.99th=[ 180] 00:09:13.403 bw ( KiB/s): min=19520, max=19520, per=27.25%, avg=19520.00, stdev= 0.00, samples=1 00:09:13.403 iops : min= 4882, max= 4882, avg=4882.00, stdev= 0.00, samples=1 00:09:13.403 lat (usec) : 100=40.02%, 250=59.98% 00:09:13.403 cpu : usr=6.40%, sys=10.70%, ctx=8216, majf=0, minf=1 00:09:13.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.403 issued rwts: total=4096,4119,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.403 00:09:13.403 Run status group 0 (all jobs): 00:09:13.403 READ: bw=67.1MiB/s (70.3MB/s), 16.0MiB/s-19.1MiB/s (16.8MB/s-20.0MB/s), io=67.1MiB (70.4MB), run=1000-1001msec 00:09:13.403 WRITE: bw=70.0MiB/s (73.4MB/s), 16.1MiB/s-20.0MiB/s (16.9MB/s-20.9MB/s), io=70.0MiB (73.4MB), run=1000-1001msec 00:09:13.403 00:09:13.403 Disk stats (read/write): 00:09:13.403 nvme0n1: ios=3355/3584, merge=0/0, ticks=350/349, in_queue=699, util=84.47% 00:09:13.403 nvme0n2: ios=4089/4096, merge=0/0, ticks=349/319, in_queue=668, util=85.41% 00:09:13.403 nvme0n3: ios=3584/3593, merge=0/0, ticks=350/322, in_queue=672, util=88.48% 00:09:13.403 nvme0n4: ios=3345/3584, merge=0/0, ticks=339/335, in_queue=674, util=89.52% 00:09:13.403 10:37:40 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:13.403 [global] 00:09:13.403 thread=1 00:09:13.403 invalidate=1 00:09:13.403 rw=randwrite 00:09:13.403 time_based=1 00:09:13.403 runtime=1 00:09:13.403 ioengine=libaio 00:09:13.403 direct=1 00:09:13.403 bs=4096 00:09:13.403 iodepth=1 00:09:13.403 norandommap=0 00:09:13.403 numjobs=1 00:09:13.403 00:09:13.403 verify_dump=1 00:09:13.403 verify_backlog=512 00:09:13.403 verify_state_save=0 00:09:13.403 do_verify=1 00:09:13.403 verify=crc32c-intel 00:09:13.403 [job0] 00:09:13.403 filename=/dev/nvme0n1 00:09:13.403 [job1] 00:09:13.403 filename=/dev/nvme0n2 00:09:13.403 [job2] 00:09:13.403 filename=/dev/nvme0n3 00:09:13.403 [job3] 00:09:13.403 filename=/dev/nvme0n4 00:09:13.403 Could not set queue depth (nvme0n1) 00:09:13.403 Could not set queue depth (nvme0n2) 00:09:13.403 Could not set queue depth 
(nvme0n3) 00:09:13.403 Could not set queue depth (nvme0n4) 00:09:13.661 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.661 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.661 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.661 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.661 fio-3.35 00:09:13.661 Starting 4 threads 00:09:15.040 00:09:15.040 job0: (groupid=0, jobs=1): err= 0: pid=3680943: Thu Nov 7 10:37:42 2024 00:09:15.040 read: IOPS=4862, BW=19.0MiB/s (19.9MB/s)(19.0MiB/1001msec) 00:09:15.040 slat (nsec): min=8224, max=22174, avg=9092.83, stdev=1518.83 00:09:15.040 clat (usec): min=65, max=152, avg=90.44, stdev=15.18 00:09:15.040 lat (usec): min=74, max=161, avg=99.54, stdev=15.39 00:09:15.040 clat percentiles (usec): 00:09:15.040 | 1.00th=[ 72], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 78], 00:09:15.040 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 89], 00:09:15.040 | 70.00th=[ 102], 80.00th=[ 109], 90.00th=[ 114], 95.00th=[ 118], 00:09:15.040 | 99.00th=[ 124], 99.50th=[ 126], 99.90th=[ 141], 99.95th=[ 145], 00:09:15.040 | 99.99th=[ 153] 00:09:15.040 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:09:15.040 slat (nsec): min=6161, max=51246, avg=11847.53, stdev=2853.78 00:09:15.040 clat (usec): min=52, max=139, avg=83.72, stdev=12.78 00:09:15.040 lat (usec): min=61, max=151, avg=95.57, stdev=13.73 00:09:15.040 clat percentiles (usec): 00:09:15.040 | 1.00th=[ 68], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 74], 00:09:15.040 | 30.00th=[ 76], 40.00th=[ 77], 50.00th=[ 80], 60.00th=[ 83], 00:09:15.040 | 70.00th=[ 90], 80.00th=[ 97], 90.00th=[ 104], 95.00th=[ 109], 00:09:15.040 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 125], 99.95th=[ 133], 00:09:15.040 | 99.99th=[ 141] 00:09:15.040 bw ( KiB/s): min=23696, max=23696, per=34.06%, avg=23696.00, stdev= 0.00, samples=1 00:09:15.040 iops : min= 5924, max= 5924, avg=5924.00, stdev= 0.00, samples=1 00:09:15.040 lat (usec) : 100=76.29%, 250=23.71% 00:09:15.040 cpu : usr=7.00%, sys=13.30%, ctx=9989, majf=0, minf=1 00:09:15.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.040 issued rwts: total=4867,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.040 job1: (groupid=0, jobs=1): err= 0: pid=3680944: Thu Nov 7 10:37:42 2024 00:09:15.040 read: IOPS=4862, BW=19.0MiB/s (19.9MB/s)(19.0MiB/1001msec) 00:09:15.040 slat (nsec): min=8256, max=29464, avg=8906.88, stdev=787.83 00:09:15.040 clat (usec): min=66, max=169, avg=90.60, stdev=15.48 00:09:15.040 lat (usec): min=75, max=178, avg=99.51, stdev=15.55 00:09:15.040 clat percentiles (usec): 00:09:15.040 | 1.00th=[ 71], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 78], 00:09:15.040 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 89], 00:09:15.040 | 70.00th=[ 103], 80.00th=[ 109], 90.00th=[ 115], 95.00th=[ 118], 00:09:15.040 | 99.00th=[ 124], 99.50th=[ 127], 99.90th=[ 135], 99.95th=[ 147], 00:09:15.040 | 99.99th=[ 169] 00:09:15.040 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:09:15.040 slat 
(nsec): min=8587, max=39367, avg=11128.54, stdev=1269.10 00:09:15.040 clat (usec): min=54, max=141, avg=84.68, stdev=13.90 00:09:15.040 lat (usec): min=74, max=151, avg=95.81, stdev=13.90 00:09:15.040 clat percentiles (usec): 00:09:15.040 | 1.00th=[ 68], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 74], 00:09:15.040 | 30.00th=[ 76], 40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 83], 00:09:15.040 | 70.00th=[ 92], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 112], 00:09:15.040 | 99.00th=[ 119], 99.50th=[ 122], 99.90th=[ 129], 99.95th=[ 133], 00:09:15.040 | 99.99th=[ 141] 00:09:15.040 bw ( KiB/s): min=23608, max=23608, per=33.94%, avg=23608.00, stdev= 0.00, samples=1 00:09:15.040 iops : min= 5902, max= 5902, avg=5902.00, stdev= 0.00, samples=1 00:09:15.040 lat (usec) : 100=74.06%, 250=25.94% 00:09:15.040 cpu : usr=7.70%, sys=13.10%, ctx=9987, majf=0, minf=1 00:09:15.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.040 issued rwts: total=4867,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.040 job2: (groupid=0, jobs=1): err= 0: pid=3680945: Thu Nov 7 10:37:42 2024 00:09:15.040 read: IOPS=3125, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec) 00:09:15.040 slat (nsec): min=8468, max=23533, avg=9167.10, stdev=777.48 00:09:15.040 clat (usec): min=74, max=209, avg=141.56, stdev=23.73 00:09:15.040 lat (usec): min=83, max=218, avg=150.73, stdev=23.74 00:09:15.040 clat percentiles (usec): 00:09:15.040 | 1.00th=[ 90], 5.00th=[ 98], 10.00th=[ 103], 20.00th=[ 130], 00:09:15.040 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 145], 00:09:15.040 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 178], 95.00th=[ 186], 00:09:15.040 | 99.00th=[ 196], 99.50th=[ 198], 99.90th=[ 204], 99.95th=[ 208], 00:09:15.040 | 99.99th=[ 210] 00:09:15.040 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:09:15.040 slat (nsec): min=10154, max=40598, avg=11150.60, stdev=1130.25 00:09:15.040 clat (usec): min=69, max=334, avg=132.16, stdev=22.49 00:09:15.040 lat (usec): min=81, max=345, avg=143.31, stdev=22.51 00:09:15.040 clat percentiles (usec): 00:09:15.040 | 1.00th=[ 84], 5.00th=[ 91], 10.00th=[ 96], 20.00th=[ 122], 00:09:15.040 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:09:15.040 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 165], 95.00th=[ 174], 00:09:15.040 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 194], 99.95th=[ 198], 00:09:15.040 | 99.99th=[ 334] 00:09:15.040 bw ( KiB/s): min=14304, max=14304, per=20.56%, avg=14304.00, stdev= 0.00, samples=1 00:09:15.040 iops : min= 3576, max= 3576, avg=3576.00, stdev= 0.00, samples=1 00:09:15.040 lat (usec) : 100=10.10%, 250=89.89%, 500=0.01% 00:09:15.040 cpu : usr=4.80%, sys=9.30%, ctx=6713, majf=0, minf=1 00:09:15.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.040 issued rwts: total=3129,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.040 job3: (groupid=0, jobs=1): err= 0: pid=3680946: Thu Nov 7 10:37:42 2024 00:09:15.040 read: IOPS=3124, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec) 00:09:15.040 
slat (nsec): min=8480, max=39352, avg=9131.50, stdev=1068.52 00:09:15.040 clat (usec): min=67, max=214, avg=141.64, stdev=23.89 00:09:15.040 lat (usec): min=83, max=223, avg=150.77, stdev=23.88 00:09:15.040 clat percentiles (usec): 00:09:15.040 | 1.00th=[ 90], 5.00th=[ 98], 10.00th=[ 103], 20.00th=[ 130], 00:09:15.040 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 145], 00:09:15.041 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 178], 95.00th=[ 188], 00:09:15.041 | 99.00th=[ 198], 99.50th=[ 200], 99.90th=[ 210], 99.95th=[ 212], 00:09:15.041 | 99.99th=[ 215] 00:09:15.041 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:09:15.041 slat (nsec): min=10172, max=39018, avg=11100.72, stdev=1216.16 00:09:15.041 clat (usec): min=71, max=341, avg=132.25, stdev=24.04 00:09:15.041 lat (usec): min=85, max=353, avg=143.35, stdev=24.08 00:09:15.041 clat percentiles (usec): 00:09:15.041 | 1.00th=[ 84], 5.00th=[ 90], 10.00th=[ 95], 20.00th=[ 121], 00:09:15.041 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 135], 00:09:15.041 | 70.00th=[ 139], 80.00th=[ 147], 90.00th=[ 169], 95.00th=[ 176], 00:09:15.041 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 196], 99.95th=[ 202], 00:09:15.041 | 99.99th=[ 343] 00:09:15.041 bw ( KiB/s): min=14304, max=14304, per=20.56%, avg=14304.00, stdev= 0.00, samples=1 00:09:15.041 iops : min= 3576, max= 3576, avg=3576.00, stdev= 0.00, samples=1 00:09:15.041 lat (usec) : 100=10.98%, 250=89.00%, 500=0.01% 00:09:15.041 cpu : usr=4.80%, sys=9.30%, ctx=6712, majf=0, minf=1 00:09:15.041 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.041 issued rwts: total=3128,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.041 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.041 00:09:15.041 Run status group 0 (all jobs): 00:09:15.041 READ: bw=62.4MiB/s (65.4MB/s), 12.2MiB/s-19.0MiB/s (12.8MB/s-19.9MB/s), io=62.5MiB (65.5MB), run=1001-1001msec 00:09:15.041 WRITE: bw=67.9MiB/s (71.2MB/s), 14.0MiB/s-20.0MiB/s (14.7MB/s-20.9MB/s), io=68.0MiB (71.3MB), run=1001-1001msec 00:09:15.041 00:09:15.041 Disk stats (read/write): 00:09:15.041 nvme0n1: ios=4145/4526, merge=0/0, ticks=331/325, in_queue=656, util=84.47% 00:09:15.041 nvme0n2: ios=4096/4525, merge=0/0, ticks=321/340, in_queue=661, util=85.42% 00:09:15.041 nvme0n3: ios=2560/3052, merge=0/0, ticks=333/380, in_queue=713, util=88.49% 00:09:15.041 nvme0n4: ios=2560/3052, merge=0/0, ticks=337/381, in_queue=718, util=89.53% 00:09:15.041 10:37:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:15.041 [global] 00:09:15.041 thread=1 00:09:15.041 invalidate=1 00:09:15.041 rw=write 00:09:15.041 time_based=1 00:09:15.041 runtime=1 00:09:15.041 ioengine=libaio 00:09:15.041 direct=1 00:09:15.041 bs=4096 00:09:15.041 iodepth=128 00:09:15.041 norandommap=0 00:09:15.041 numjobs=1 00:09:15.041 00:09:15.041 verify_dump=1 00:09:15.041 verify_backlog=512 00:09:15.041 verify_state_save=0 00:09:15.041 do_verify=1 00:09:15.041 verify=crc32c-intel 00:09:15.041 [job0] 00:09:15.041 filename=/dev/nvme0n1 00:09:15.041 [job1] 00:09:15.041 filename=/dev/nvme0n2 00:09:15.041 [job2] 00:09:15.041 filename=/dev/nvme0n3 00:09:15.041 [job3] 00:09:15.041 filename=/dev/nvme0n4 00:09:15.041 
Could not set queue depth (nvme0n1) 00:09:15.041 Could not set queue depth (nvme0n2) 00:09:15.041 Could not set queue depth (nvme0n3) 00:09:15.041 Could not set queue depth (nvme0n4) 00:09:15.300 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.300 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.300 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.300 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.300 fio-3.35 00:09:15.300 Starting 4 threads 00:09:16.718 00:09:16.718 job0: (groupid=0, jobs=1): err= 0: pid=3681359: Thu Nov 7 10:37:44 2024 00:09:16.718 read: IOPS=3662, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1005msec) 00:09:16.718 slat (usec): min=2, max=1072, avg=130.62, stdev=250.81 00:09:16.718 clat (usec): min=3477, max=18760, avg=16688.69, stdev=1631.06 00:09:16.718 lat (usec): min=3978, max=18768, avg=16819.31, stdev=1620.97 00:09:16.718 clat percentiles (usec): 00:09:16.718 | 1.00th=[ 8356], 5.00th=[15401], 10.00th=[15533], 20.00th=[15926], 00:09:16.718 | 30.00th=[15926], 40.00th=[16057], 50.00th=[16909], 60.00th=[17433], 00:09:16.718 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18220], 95.00th=[18220], 00:09:16.718 | 99.00th=[18482], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:09:16.718 | 99.99th=[18744] 00:09:16.718 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:09:16.718 slat (usec): min=2, max=974, avg=122.69, stdev=231.23 00:09:16.718 clat (usec): min=11308, max=19430, avg=15989.89, stdev=1029.55 00:09:16.718 lat (usec): min=11388, max=19440, avg=16112.58, stdev=1017.70 00:09:16.718 clat percentiles (usec): 00:09:16.718 | 1.00th=[14091], 5.00th=[14615], 10.00th=[14615], 20.00th=[15008], 00:09:16.718 | 30.00th=[15008], 40.00th=[15401], 50.00th=[16319], 60.00th=[16581], 00:09:16.718 | 70.00th=[16712], 80.00th=[16909], 90.00th=[17171], 95.00th=[17433], 00:09:16.718 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18744], 99.95th=[19530], 00:09:16.718 | 99.99th=[19530] 00:09:16.718 bw ( KiB/s): min=16144, max=16384, per=20.46%, avg=16264.00, stdev=169.71, samples=2 00:09:16.718 iops : min= 4036, max= 4096, avg=4066.00, stdev=42.43, samples=2 00:09:16.718 lat (msec) : 4=0.04%, 10=0.63%, 20=99.33% 00:09:16.718 cpu : usr=2.99%, sys=4.08%, ctx=3405, majf=0, minf=1 00:09:16.718 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:16.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.718 issued rwts: total=3681,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.718 job1: (groupid=0, jobs=1): err= 0: pid=3681360: Thu Nov 7 10:37:44 2024 00:09:16.718 read: IOPS=3667, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1005msec) 00:09:16.718 slat (usec): min=2, max=1128, avg=130.15, stdev=244.94 00:09:16.718 clat (usec): min=3477, max=18762, avg=16667.99, stdev=1688.88 00:09:16.718 lat (usec): min=3923, max=18771, avg=16798.14, stdev=1681.01 00:09:16.718 clat percentiles (usec): 00:09:16.718 | 1.00th=[ 7898], 5.00th=[15401], 10.00th=[15533], 20.00th=[15926], 00:09:16.718 | 30.00th=[15926], 40.00th=[16057], 50.00th=[16909], 60.00th=[17433], 00:09:16.718 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18220], 
95.00th=[18220], 00:09:16.718 | 99.00th=[18482], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:09:16.718 | 99.99th=[18744] 00:09:16.718 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:09:16.718 slat (usec): min=2, max=935, avg=122.86, stdev=230.26 00:09:16.718 clat (usec): min=11299, max=19433, avg=15989.57, stdev=1016.24 00:09:16.718 lat (usec): min=11341, max=19461, avg=16112.43, stdev=1005.00 00:09:16.718 clat percentiles (usec): 00:09:16.718 | 1.00th=[14091], 5.00th=[14615], 10.00th=[14746], 20.00th=[15008], 00:09:16.718 | 30.00th=[15008], 40.00th=[15401], 50.00th=[16319], 60.00th=[16581], 00:09:16.718 | 70.00th=[16712], 80.00th=[16909], 90.00th=[17171], 95.00th=[17433], 00:09:16.718 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18744], 99.95th=[18744], 00:09:16.718 | 99.99th=[19530] 00:09:16.718 bw ( KiB/s): min=16184, max=16384, per=20.49%, avg=16284.00, stdev=141.42, samples=2 00:09:16.718 iops : min= 4046, max= 4096, avg=4071.00, stdev=35.36, samples=2 00:09:16.718 lat (msec) : 4=0.10%, 10=0.63%, 20=99.27% 00:09:16.718 cpu : usr=3.19%, sys=4.08%, ctx=3393, majf=0, minf=2 00:09:16.718 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:16.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.718 issued rwts: total=3686,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.718 job2: (groupid=0, jobs=1): err= 0: pid=3681361: Thu Nov 7 10:37:44 2024 00:09:16.718 read: IOPS=7190, BW=28.1MiB/s (29.5MB/s)(28.2MiB/1005msec) 00:09:16.718 slat (usec): min=2, max=2808, avg=66.16, stdev=190.16 00:09:16.718 clat (usec): min=2623, max=16500, avg=8614.35, stdev=3740.37 00:09:16.718 lat (usec): min=5066, max=16530, avg=8680.51, stdev=3768.45 00:09:16.718 clat percentiles (usec): 00:09:16.718 | 1.00th=[ 5735], 5.00th=[ 6128], 10.00th=[ 6259], 20.00th=[ 6456], 00:09:16.718 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6783], 60.00th=[ 6915], 00:09:16.718 | 70.00th=[ 7046], 80.00th=[15270], 90.00th=[15926], 95.00th=[16057], 00:09:16.718 | 99.00th=[16057], 99.50th=[16057], 99.90th=[16450], 99.95th=[16450], 00:09:16.718 | 99.99th=[16450] 00:09:16.718 write: IOPS=7641, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1005msec); 0 zone resets 00:09:16.718 slat (usec): min=2, max=1373, avg=63.55, stdev=171.93 00:09:16.718 clat (usec): min=3960, max=16178, avg=8420.92, stdev=3646.99 00:09:16.718 lat (usec): min=4024, max=16212, avg=8484.47, stdev=3674.63 00:09:16.718 clat percentiles (usec): 00:09:16.718 | 1.00th=[ 5473], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128], 00:09:16.718 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 00:09:16.718 | 70.00th=[ 6915], 80.00th=[14615], 90.00th=[15008], 95.00th=[15139], 00:09:16.718 | 99.00th=[15533], 99.50th=[15664], 99.90th=[15795], 99.95th=[15926], 00:09:16.718 | 99.99th=[16188] 00:09:16.718 bw ( KiB/s): min=20480, max=40400, per=38.30%, avg=30440.00, stdev=14085.57, samples=2 00:09:16.718 iops : min= 5120, max=10100, avg=7610.00, stdev=3521.39, samples=2 00:09:16.718 lat (msec) : 4=0.01%, 10=77.00%, 20=22.98% 00:09:16.718 cpu : usr=4.48%, sys=7.67%, ctx=3121, majf=0, minf=1 00:09:16.718 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:16.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:09:16.718 issued rwts: total=7226,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.718 job3: (groupid=0, jobs=1): err= 0: pid=3681362: Thu Nov 7 10:37:44 2024 00:09:16.718 read: IOPS=3664, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1005msec) 00:09:16.718 slat (usec): min=2, max=1086, avg=130.46, stdev=246.26 00:09:16.718 clat (usec): min=3452, max=18762, avg=16673.63, stdev=1652.50 00:09:16.718 lat (usec): min=3940, max=18770, avg=16804.09, stdev=1642.91 00:09:16.718 clat percentiles (usec): 00:09:16.718 | 1.00th=[ 7898], 5.00th=[15270], 10.00th=[15533], 20.00th=[15926], 00:09:16.718 | 30.00th=[15926], 40.00th=[16057], 50.00th=[16909], 60.00th=[17433], 00:09:16.718 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18220], 95.00th=[18220], 00:09:16.718 | 99.00th=[18482], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:09:16.718 | 99.99th=[18744] 00:09:16.718 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:09:16.718 slat (usec): min=2, max=954, avg=122.69, stdev=230.49 00:09:16.718 clat (usec): min=11691, max=19344, avg=15990.94, stdev=1020.90 00:09:16.718 lat (usec): min=11702, max=19348, avg=16113.64, stdev=1008.89 00:09:16.718 clat percentiles (usec): 00:09:16.718 | 1.00th=[14091], 5.00th=[14615], 10.00th=[14746], 20.00th=[15008], 00:09:16.718 | 30.00th=[15008], 40.00th=[15401], 50.00th=[16319], 60.00th=[16581], 00:09:16.718 | 70.00th=[16712], 80.00th=[16909], 90.00th=[17171], 95.00th=[17433], 00:09:16.718 | 99.00th=[17957], 99.50th=[18220], 99.90th=[19268], 99.95th=[19268], 00:09:16.718 | 99.99th=[19268] 00:09:16.718 bw ( KiB/s): min=16160, max=16384, per=20.47%, avg=16272.00, stdev=158.39, samples=2 00:09:16.718 iops : min= 4040, max= 4096, avg=4068.00, stdev=39.60, samples=2 00:09:16.718 lat (msec) : 4=0.06%, 10=0.62%, 20=99.32% 00:09:16.718 cpu : usr=3.19%, sys=3.98%, ctx=3368, majf=0, minf=1 00:09:16.718 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:16.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.718 issued rwts: total=3683,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.718 00:09:16.718 Run status group 0 (all jobs): 00:09:16.718 READ: bw=71.0MiB/s (74.5MB/s), 14.3MiB/s-28.1MiB/s (15.0MB/s-29.5MB/s), io=71.4MiB (74.9MB), run=1005-1005msec 00:09:16.718 WRITE: bw=77.6MiB/s (81.4MB/s), 15.9MiB/s-29.9MiB/s (16.7MB/s-31.3MB/s), io=78.0MiB (81.8MB), run=1005-1005msec 00:09:16.718 00:09:16.718 Disk stats (read/write): 00:09:16.718 nvme0n1: ios=3121/3453, merge=0/0, ticks=12833/13459, in_queue=26292, util=84.37% 00:09:16.718 nvme0n2: ios=3072/3453, merge=0/0, ticks=12885/13458, in_queue=26343, util=85.50% 00:09:16.718 nvme0n3: ios=5632/6091, merge=0/0, ticks=12707/13254, in_queue=25961, util=88.48% 00:09:16.718 nvme0n4: ios=3072/3454, merge=0/0, ticks=12869/13481, in_queue=26350, util=89.52% 00:09:16.718 10:37:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:16.718 [global] 00:09:16.718 thread=1 00:09:16.718 invalidate=1 00:09:16.718 rw=randwrite 00:09:16.718 time_based=1 00:09:16.718 runtime=1 00:09:16.718 ioengine=libaio 00:09:16.718 direct=1 00:09:16.719 bs=4096 00:09:16.719 iodepth=128 00:09:16.719 
norandommap=0 00:09:16.719 numjobs=1 00:09:16.719 00:09:16.719 verify_dump=1 00:09:16.719 verify_backlog=512 00:09:16.719 verify_state_save=0 00:09:16.719 do_verify=1 00:09:16.719 verify=crc32c-intel 00:09:16.719 [job0] 00:09:16.719 filename=/dev/nvme0n1 00:09:16.719 [job1] 00:09:16.719 filename=/dev/nvme0n2 00:09:16.719 [job2] 00:09:16.719 filename=/dev/nvme0n3 00:09:16.719 [job3] 00:09:16.719 filename=/dev/nvme0n4 00:09:16.719 Could not set queue depth (nvme0n1) 00:09:16.719 Could not set queue depth (nvme0n2) 00:09:16.719 Could not set queue depth (nvme0n3) 00:09:16.719 Could not set queue depth (nvme0n4) 00:09:16.990 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:16.990 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:16.990 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:16.990 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:16.990 fio-3.35 00:09:16.990 Starting 4 threads 00:09:18.399 00:09:18.399 job0: (groupid=0, jobs=1): err= 0: pid=3681779: Thu Nov 7 10:37:45 2024 00:09:18.399 read: IOPS=9811, BW=38.3MiB/s (40.2MB/s)(38.4MiB/1001msec) 00:09:18.399 slat (usec): min=2, max=1377, avg=49.59, stdev=185.77 00:09:18.399 clat (usec): min=531, max=7969, avg=6484.22, stdev=743.89 00:09:18.399 lat (usec): min=1569, max=7995, avg=6533.81, stdev=726.08 00:09:18.399 clat percentiles (usec): 00:09:18.399 | 1.00th=[ 4883], 5.00th=[ 5473], 10.00th=[ 5604], 20.00th=[ 5735], 00:09:18.399 | 30.00th=[ 5866], 40.00th=[ 6456], 50.00th=[ 6783], 60.00th=[ 6915], 00:09:18.399 | 70.00th=[ 7046], 80.00th=[ 7111], 90.00th=[ 7242], 95.00th=[ 7373], 00:09:18.399 | 99.00th=[ 7504], 99.50th=[ 7570], 99.90th=[ 7570], 99.95th=[ 7635], 00:09:18.399 | 99.99th=[ 7963] 00:09:18.399 write: IOPS=10.2k, BW=40.0MiB/s (41.9MB/s)(40.0MiB/1001msec); 0 zone resets 00:09:18.399 slat (usec): min=2, max=1273, avg=46.57, stdev=171.33 00:09:18.399 clat (usec): min=4290, max=7481, avg=6170.76, stdev=691.95 00:09:18.399 lat (usec): min=4298, max=7936, avg=6217.33, stdev=676.25 00:09:18.399 clat percentiles (usec): 00:09:18.399 | 1.00th=[ 4686], 5.00th=[ 5211], 10.00th=[ 5276], 20.00th=[ 5407], 00:09:18.399 | 30.00th=[ 5473], 40.00th=[ 5932], 50.00th=[ 6521], 60.00th=[ 6652], 00:09:18.399 | 70.00th=[ 6718], 80.00th=[ 6783], 90.00th=[ 6915], 95.00th=[ 7046], 00:09:18.399 | 99.00th=[ 7242], 99.50th=[ 7308], 99.90th=[ 7439], 99.95th=[ 7439], 00:09:18.399 | 99.99th=[ 7504] 00:09:18.399 bw ( KiB/s): min=44448, max=44448, per=44.42%, avg=44448.00, stdev= 0.00, samples=1 00:09:18.399 iops : min=11112, max=11112, avg=11112.00, stdev= 0.00, samples=1 00:09:18.399 lat (usec) : 750=0.01% 00:09:18.399 lat (msec) : 2=0.06%, 4=0.23%, 10=99.71% 00:09:18.399 cpu : usr=4.50%, sys=8.10%, ctx=1267, majf=0, minf=1 00:09:18.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:18.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.399 issued rwts: total=9821,10240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.399 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.399 job1: (groupid=0, jobs=1): err= 0: pid=3681780: Thu Nov 7 10:37:45 2024 00:09:18.399 read: IOPS=3882, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1003msec) 00:09:18.399 slat 
(usec): min=2, max=1403, avg=125.75, stdev=303.67 00:09:18.399 clat (usec): min=2066, max=20340, avg=16060.41, stdev=2265.03 00:09:18.399 lat (usec): min=2741, max=20360, avg=16186.16, stdev=2258.66 00:09:18.399 clat percentiles (usec): 00:09:18.399 | 1.00th=[ 6587], 5.00th=[14222], 10.00th=[14615], 20.00th=[14877], 00:09:18.399 | 30.00th=[15139], 40.00th=[15270], 50.00th=[15401], 60.00th=[15664], 00:09:18.399 | 70.00th=[15926], 80.00th=[18744], 90.00th=[19268], 95.00th=[19530], 00:09:18.399 | 99.00th=[19530], 99.50th=[19530], 99.90th=[20055], 99.95th=[20317], 00:09:18.399 | 99.99th=[20317] 00:09:18.399 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:09:18.399 slat (usec): min=2, max=1472, avg=120.28, stdev=287.55 00:09:18.399 clat (usec): min=12667, max=19942, avg=15624.76, stdev=1716.43 00:09:18.399 lat (usec): min=13622, max=19946, avg=15745.04, stdev=1707.35 00:09:18.399 clat percentiles (usec): 00:09:18.399 | 1.00th=[13304], 5.00th=[13829], 10.00th=[14091], 20.00th=[14353], 00:09:18.399 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14877], 60.00th=[15008], 00:09:18.399 | 70.00th=[16319], 80.00th=[17957], 90.00th=[18220], 95.00th=[18482], 00:09:18.399 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19268], 99.95th=[19530], 00:09:18.399 | 99.99th=[20055] 00:09:18.399 bw ( KiB/s): min=16384, max=16384, per=16.38%, avg=16384.00, stdev= 0.00, samples=2 00:09:18.399 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:18.399 lat (msec) : 4=0.30%, 10=0.51%, 20=99.12%, 50=0.06% 00:09:18.399 cpu : usr=2.79%, sys=3.79%, ctx=1799, majf=0, minf=1 00:09:18.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:18.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.399 issued rwts: total=3894,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.399 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.399 job2: (groupid=0, jobs=1): err= 0: pid=3681781: Thu Nov 7 10:37:45 2024 00:09:18.399 read: IOPS=3880, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1003msec) 00:09:18.399 slat (usec): min=2, max=1304, avg=125.68, stdev=302.37 00:09:18.399 clat (usec): min=2126, max=19678, avg=16068.25, stdev=2229.10 00:09:18.399 lat (usec): min=2806, max=20211, avg=16193.93, stdev=2221.91 00:09:18.399 clat percentiles (usec): 00:09:18.399 | 1.00th=[ 6652], 5.00th=[14222], 10.00th=[14615], 20.00th=[14877], 00:09:18.399 | 30.00th=[15139], 40.00th=[15270], 50.00th=[15401], 60.00th=[15664], 00:09:18.399 | 70.00th=[15926], 80.00th=[18744], 90.00th=[19268], 95.00th=[19268], 00:09:18.399 | 99.00th=[19530], 99.50th=[19530], 99.90th=[19530], 99.95th=[19792], 00:09:18.399 | 99.99th=[19792] 00:09:18.399 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:09:18.399 slat (usec): min=2, max=1443, avg=120.44, stdev=287.46 00:09:18.399 clat (usec): min=12781, max=19266, avg=15622.85, stdev=1721.24 00:09:18.399 lat (usec): min=13648, max=19270, avg=15743.28, stdev=1711.93 00:09:18.399 clat percentiles (usec): 00:09:18.399 | 1.00th=[13304], 5.00th=[13698], 10.00th=[14091], 20.00th=[14353], 00:09:18.399 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14877], 60.00th=[15008], 00:09:18.399 | 70.00th=[16319], 80.00th=[17957], 90.00th=[18220], 95.00th=[18482], 00:09:18.399 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19268], 99.95th=[19268], 00:09:18.399 | 99.99th=[19268] 00:09:18.399 bw ( KiB/s): min=16384, max=16384, 
per=16.38%, avg=16384.00, stdev= 0.00, samples=2 00:09:18.399 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:18.399 lat (msec) : 4=0.25%, 10=0.56%, 20=99.19% 00:09:18.399 cpu : usr=2.30%, sys=4.29%, ctx=1705, majf=0, minf=1 00:09:18.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:18.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.399 issued rwts: total=3892,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.399 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.399 job3: (groupid=0, jobs=1): err= 0: pid=3681782: Thu Nov 7 10:37:45 2024 00:09:18.399 read: IOPS=6164, BW=24.1MiB/s (25.2MB/s)(24.1MiB/1002msec) 00:09:18.399 slat (usec): min=2, max=1204, avg=76.92, stdev=237.44 00:09:18.399 clat (usec): min=455, max=20012, avg=10004.98, stdev=4103.16 00:09:18.399 lat (usec): min=1421, max=20254, avg=10081.89, stdev=4133.69 00:09:18.399 clat percentiles (usec): 00:09:18.399 | 1.00th=[ 7242], 5.00th=[ 7504], 10.00th=[ 7635], 20.00th=[ 7898], 00:09:18.399 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8586], 00:09:18.399 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[19268], 95.00th=[19268], 00:09:18.399 | 99.00th=[19530], 99.50th=[19530], 99.90th=[19530], 99.95th=[19792], 00:09:18.399 | 99.99th=[20055] 00:09:18.399 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:09:18.399 slat (usec): min=2, max=1451, avg=74.88, stdev=227.14 00:09:18.399 clat (usec): min=3194, max=19950, avg=9707.59, stdev=3910.52 00:09:18.399 lat (usec): min=3197, max=19954, avg=9782.48, stdev=3939.84 00:09:18.399 clat percentiles (usec): 00:09:18.399 | 1.00th=[ 5997], 5.00th=[ 7177], 10.00th=[ 7373], 20.00th=[ 7635], 00:09:18.399 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8225], 00:09:18.399 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[18220], 95.00th=[18220], 00:09:18.399 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19268], 99.95th=[19530], 00:09:18.399 | 99.99th=[20055] 00:09:18.399 bw ( KiB/s): min=20480, max=20480, per=20.47%, avg=20480.00, stdev= 0.00, samples=1 00:09:18.399 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:09:18.399 lat (usec) : 500=0.01% 00:09:18.399 lat (msec) : 2=0.11%, 4=0.25%, 10=82.08%, 20=17.55%, 50=0.01% 00:09:18.399 cpu : usr=3.10%, sys=6.29%, ctx=1613, majf=0, minf=2 00:09:18.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:18.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.399 issued rwts: total=6177,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.399 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.399 00:09:18.399 Run status group 0 (all jobs): 00:09:18.399 READ: bw=92.6MiB/s (97.1MB/s), 15.2MiB/s-38.3MiB/s (15.9MB/s-40.2MB/s), io=92.9MiB (97.4MB), run=1001-1003msec 00:09:18.399 WRITE: bw=97.7MiB/s (102MB/s), 16.0MiB/s-40.0MiB/s (16.7MB/s-41.9MB/s), io=98.0MiB (103MB), run=1001-1003msec 00:09:18.399 00:09:18.399 Disk stats (read/write): 00:09:18.399 nvme0n1: ios=8241/8242, merge=0/0, ticks=16976/15912, in_queue=32888, util=81.56% 00:09:18.399 nvme0n2: ios=3072/3242, merge=0/0, ticks=12673/12853, in_queue=25526, util=82.96% 00:09:18.399 nvme0n3: ios=3072/3242, merge=0/0, ticks=12678/12869, in_queue=25547, util=87.54% 00:09:18.399 nvme0n4: 
ios=4651/5120, merge=0/0, ticks=12282/13106, in_queue=25388, util=89.18% 00:09:18.399 10:37:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:18.399 10:37:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3682043 00:09:18.400 10:37:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:18.400 10:37:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:18.400 [global] 00:09:18.400 thread=1 00:09:18.400 invalidate=1 00:09:18.400 rw=read 00:09:18.400 time_based=1 00:09:18.400 runtime=10 00:09:18.400 ioengine=libaio 00:09:18.400 direct=1 00:09:18.400 bs=4096 00:09:18.400 iodepth=1 00:09:18.400 norandommap=1 00:09:18.400 numjobs=1 00:09:18.400 00:09:18.400 [job0] 00:09:18.400 filename=/dev/nvme0n1 00:09:18.400 [job1] 00:09:18.400 filename=/dev/nvme0n2 00:09:18.400 [job2] 00:09:18.400 filename=/dev/nvme0n3 00:09:18.400 [job3] 00:09:18.400 filename=/dev/nvme0n4 00:09:18.400 Could not set queue depth (nvme0n1) 00:09:18.400 Could not set queue depth (nvme0n2) 00:09:18.400 Could not set queue depth (nvme0n3) 00:09:18.400 Could not set queue depth (nvme0n4) 00:09:18.666 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.666 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.666 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.666 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.666 fio-3.35 00:09:18.666 Starting 4 threads 00:09:21.201 10:37:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:21.460 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=88477696, buflen=4096 00:09:21.460 fio: pid=3682221, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:21.460 10:37:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:21.460 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=80134144, buflen=4096 00:09:21.460 fio: pid=3682217, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:21.460 10:37:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:21.460 10:37:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:21.719 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=24633344, buflen=4096 00:09:21.719 fio: pid=3682196, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:21.719 10:37:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:21.719 10:37:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:21.979 fio: io_u error on file /dev/nvme0n2: Operation not supported: read 
offset=48115712, buflen=4096 00:09:21.979 fio: pid=3682206, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:21.979 10:37:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:21.979 10:37:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:21.979 00:09:21.979 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3682196: Thu Nov 7 10:37:49 2024 00:09:21.979 read: IOPS=7387, BW=28.9MiB/s (30.3MB/s)(87.5MiB/3032msec) 00:09:21.979 slat (usec): min=6, max=14006, avg=10.83, stdev=147.28 00:09:21.979 clat (usec): min=38, max=387, avg=122.89, stdev=31.25 00:09:21.979 lat (usec): min=58, max=14137, avg=133.72, stdev=150.60 00:09:21.979 clat percentiles (usec): 00:09:21.979 | 1.00th=[ 58], 5.00th=[ 73], 10.00th=[ 76], 20.00th=[ 83], 00:09:21.979 | 30.00th=[ 116], 40.00th=[ 124], 50.00th=[ 128], 60.00th=[ 133], 00:09:21.979 | 70.00th=[ 143], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 165], 00:09:21.979 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 200], 99.95th=[ 212], 00:09:21.979 | 99.99th=[ 223] 00:09:21.979 bw ( KiB/s): min=23936, max=34360, per=25.02%, avg=28177.60, stdev=4158.04, samples=5 00:09:21.979 iops : min= 5984, max= 8590, avg=7044.40, stdev=1039.51, samples=5 00:09:21.979 lat (usec) : 50=0.01%, 100=24.71%, 250=75.27%, 500=0.01% 00:09:21.979 cpu : usr=3.03%, sys=10.82%, ctx=22403, majf=0, minf=1 00:09:21.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.979 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.979 issued rwts: total=22399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.979 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.979 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3682206: Thu Nov 7 10:37:49 2024 00:09:21.979 read: IOPS=8637, BW=33.7MiB/s (35.4MB/s)(110MiB/3257msec) 00:09:21.979 slat (usec): min=8, max=15958, avg=11.52, stdev=167.89 00:09:21.979 clat (usec): min=37, max=21447, avg=102.51, stdev=140.50 00:09:21.979 lat (usec): min=57, max=21455, avg=114.03, stdev=218.83 00:09:21.979 clat percentiles (usec): 00:09:21.979 | 1.00th=[ 55], 5.00th=[ 59], 10.00th=[ 65], 20.00th=[ 77], 00:09:21.979 | 30.00th=[ 81], 40.00th=[ 85], 50.00th=[ 90], 60.00th=[ 111], 00:09:21.979 | 70.00th=[ 124], 80.00th=[ 130], 90.00th=[ 147], 95.00th=[ 155], 00:09:21.979 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 194], 99.95th=[ 204], 00:09:21.979 | 99.99th=[ 611] 00:09:21.979 bw ( KiB/s): min=29072, max=42392, per=29.75%, avg=33499.33, stdev=5342.36, samples=6 00:09:21.979 iops : min= 7268, max=10598, avg=8374.83, stdev=1335.59, samples=6 00:09:21.979 lat (usec) : 50=0.02%, 100=57.04%, 250=42.92%, 500=0.01%, 750=0.01% 00:09:21.979 lat (msec) : 10=0.01%, 50=0.01% 00:09:21.979 cpu : usr=3.90%, sys=12.32%, ctx=28139, majf=0, minf=2 00:09:21.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.979 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.979 issued rwts: total=28132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.979 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:09:21.979 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3682217: Thu Nov 7 10:37:49 2024 00:09:21.979 read: IOPS=6867, BW=26.8MiB/s (28.1MB/s)(76.4MiB/2849msec) 00:09:21.979 slat (usec): min=6, max=11897, avg=10.85, stdev=120.07 00:09:21.979 clat (usec): min=68, max=8632, avg=132.13, stdev=64.99 00:09:21.979 lat (usec): min=78, max=12001, avg=142.98, stdev=136.32 00:09:21.979 clat percentiles (usec): 00:09:21.979 | 1.00th=[ 80], 5.00th=[ 91], 10.00th=[ 101], 20.00th=[ 117], 00:09:21.979 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 130], 60.00th=[ 135], 00:09:21.979 | 70.00th=[ 145], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 165], 00:09:21.979 | 99.00th=[ 186], 99.50th=[ 196], 99.90th=[ 212], 99.95th=[ 221], 00:09:21.979 | 99.99th=[ 644] 00:09:21.979 bw ( KiB/s): min=23768, max=32048, per=24.57%, avg=27664.00, stdev=3408.09, samples=5 00:09:21.979 iops : min= 5942, max= 8012, avg=6916.00, stdev=852.02, samples=5 00:09:21.979 lat (usec) : 100=9.58%, 250=90.40%, 500=0.01%, 750=0.01% 00:09:21.979 lat (msec) : 10=0.01% 00:09:21.979 cpu : usr=3.09%, sys=10.39%, ctx=19567, majf=0, minf=2 00:09:21.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.979 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.979 issued rwts: total=19565,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.979 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.979 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3682221: Thu Nov 7 10:37:49 2024 00:09:21.979 read: IOPS=8198, BW=32.0MiB/s (33.6MB/s)(84.4MiB/2635msec) 00:09:21.979 slat (nsec): min=8319, max=36429, avg=9049.10, stdev=961.96 00:09:21.979 clat (usec): min=67, max=334, avg=111.12, stdev=27.42 00:09:21.979 lat (usec): min=76, max=343, avg=120.17, stdev=27.56 00:09:21.979 clat percentiles (usec): 00:09:21.979 | 1.00th=[ 78], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 86], 00:09:21.979 | 30.00th=[ 89], 40.00th=[ 93], 50.00th=[ 101], 60.00th=[ 122], 00:09:21.979 | 70.00th=[ 126], 80.00th=[ 133], 90.00th=[ 155], 95.00th=[ 163], 00:09:21.979 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 206], 99.95th=[ 210], 00:09:21.979 | 99.99th=[ 223] 00:09:21.979 bw ( KiB/s): min=24312, max=40424, per=29.00%, avg=32662.40, stdev=6791.95, samples=5 00:09:21.979 iops : min= 6078, max=10106, avg=8165.60, stdev=1697.99, samples=5 00:09:21.979 lat (usec) : 100=49.24%, 250=50.75%, 500=0.01% 00:09:21.979 cpu : usr=3.61%, sys=11.73%, ctx=21602, majf=0, minf=2 00:09:21.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.979 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.979 issued rwts: total=21602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.979 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.979 00:09:21.979 Run status group 0 (all jobs): 00:09:21.980 READ: bw=110MiB/s (115MB/s), 26.8MiB/s-33.7MiB/s (28.1MB/s-35.4MB/s), io=358MiB (376MB), run=2635-3257msec 00:09:21.980 00:09:21.980 Disk stats (read/write): 00:09:21.980 nvme0n1: ios=20320/0, merge=0/0, ticks=2398/0, in_queue=2398, util=93.95% 00:09:21.980 nvme0n2: ios=25777/0, merge=0/0, ticks=2512/0, in_queue=2512, util=93.74% 00:09:21.980 nvme0n3: 
ios=19564/0, merge=0/0, ticks=2407/0, in_queue=2407, util=95.41% 00:09:21.980 nvme0n4: ios=21207/0, merge=0/0, ticks=2165/0, in_queue=2165, util=96.46% 00:09:22.238 10:37:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:22.238 10:37:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:22.497 10:37:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:22.497 10:37:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:22.756 10:37:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:22.757 10:37:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:23.015 10:37:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:23.015 10:37:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:23.015 10:37:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:23.015 10:37:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3682043 00:09:23.015 10:37:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:23.016 10:37:50 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.953 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:23.953 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:09:23.953 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:23.953 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.953 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:23.953 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.953 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:09:23.953 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:23.953 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:23.953 nvmf hotplug test: fio failed as expected 00:09:23.953 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 
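The trace above is the hotplug check from target/fio.sh: a 10-second read job (-d 1 -t read -r 10) is left running in the background while every backing bdev, the two RAID bdevs included, is deleted over JSON-RPC; each deletion surfaces in fio as an io_u "Operation not supported" error, fio exits nonzero (fio_status=4 here), and the test treats that failure as the expected outcome. In outline, with the long /var/jenkins/... script paths abbreviated to fio-wrapper and rpc.py (a sketch of the flow visible in the trace, not the script source):

fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3
rpc.py bdev_raid_delete concat0        # yank the bdevs while fio is mid-read
rpc.py bdev_raid_delete raid0
for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    rpc.py bdev_malloc_delete "$malloc_bdev"
done
fio_status=0
wait "$fio_pid" || fio_status=$?       # 4 in this run
if [ "$fio_status" -ne 0 ]; then
    echo 'nvmf hotplug test: fio failed as expected'
fi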
00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:24.212 rmmod nvme_rdma 00:09:24.212 rmmod nvme_fabrics 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3679136 ']' 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3679136 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3679136 ']' 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3679136 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:24.212 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3679136 00:09:24.472 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:24.472 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:24.472 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3679136' 00:09:24.472 killing process with pid 3679136 00:09:24.472 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3679136 00:09:24.472 10:37:51 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3679136 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:24.731 00:09:24.731 real 0m26.997s 00:09:24.731 user 2m8.769s 00:09:24.731 sys 0m10.471s 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.731 ************************************ 00:09:24.731 END TEST nvmf_fio_target 00:09:24.731 ************************************ 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:24.731 ************************************ 00:09:24.731 START TEST nvmf_bdevio 00:09:24.731 ************************************ 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:09:24.731 * Looking for test storage... 00:09:24.731 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
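killprocess, traced just above, checks the PID before signalling it: kill -0 confirms the SPDK target (pid 3679136) is still alive, and ps -o comm= guards against killing a sudo wrapper by mistake (the name here is reactor_0, so it proceeds). In outline, covering only the branch this run exercises:

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid"                                # fails if the process is already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # reactor_0 != sudo in this run, so signal the process directly and reap it
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}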
ver1_l : ver2_l) )) 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:24.731 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:24.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.991 --rc genhtml_branch_coverage=1 00:09:24.991 --rc genhtml_function_coverage=1 00:09:24.991 --rc genhtml_legend=1 00:09:24.991 --rc geninfo_all_blocks=1 00:09:24.991 --rc geninfo_unexecuted_blocks=1 00:09:24.991 00:09:24.991 ' 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:24.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.991 --rc genhtml_branch_coverage=1 00:09:24.991 --rc genhtml_function_coverage=1 00:09:24.991 --rc genhtml_legend=1 00:09:24.991 --rc geninfo_all_blocks=1 00:09:24.991 --rc geninfo_unexecuted_blocks=1 00:09:24.991 00:09:24.991 ' 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:24.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.991 --rc genhtml_branch_coverage=1 00:09:24.991 --rc genhtml_function_coverage=1 00:09:24.991 --rc genhtml_legend=1 00:09:24.991 --rc geninfo_all_blocks=1 00:09:24.991 --rc geninfo_unexecuted_blocks=1 00:09:24.991 00:09:24.991 ' 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:24.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.991 --rc genhtml_branch_coverage=1 00:09:24.991 --rc genhtml_function_coverage=1 00:09:24.991 --rc genhtml_legend=1 00:09:24.991 --rc geninfo_all_blocks=1 00:09:24.991 --rc geninfo_unexecuted_blocks=1 00:09:24.991 00:09:24.991 ' 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:24.991 10:37:52 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.991 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:24.991 10:37:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:31.628 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:31.628 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:31.628 10:37:58 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:31.628 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:31.628 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
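[Editor's note] The entries above trace common.sh's get_ip_address helper: given an interface name it extracts the first IPv4 address from `ip -o -4 addr show`, exactly as echoed in the xtrace. A minimal standalone sketch of that pattern (the error message at the end is illustrative, not the common.sh source):

  # Print the first IPv4 address on an interface (empty output if none).
  get_ip_address() {
      local interface=$1
      # `ip -o -4` prints one address per line; field 4 is "ADDR/PREFIX".
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  ip=$(get_ip_address mlx_0_0)             # -> 192.168.100.8 on this rig
  [[ -z $ip ]] && echo "no IPv4 address on mlx_0_0" >&2

The trace continues below with the `ip addr show` output that confirms the address.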
00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:09:31.628 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:09:31.628 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:09:31.628 altname enp217s0f0np0
00:09:31.628 altname ens818f0np0
00:09:31.628 inet 192.168.100.8/24 scope global mlx_0_0
00:09:31.628 valid_lft forever preferred_lft forever
00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:09:31.628 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}'
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:09:31.629 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:09:31.629 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:09:31.629 altname enp217s0f1np1
00:09:31.629 altname ens818f1np1
00:09:31.629 inet 192.168.100.9/24 scope global mlx_0_1
00:09:31.629 valid_lft forever preferred_lft forever
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2
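[Editor's note] The nested loops traced here implement get_rdma_if_list: every discovered net device is checked against the interface list reported by rxe_cfg, and a matching name is emitted once (`continue 2` jumps to the next net device after the first hit; the same match for mlx_0_1 follows below). A condensed sketch of that logic, assuming net_devs is already populated and using SPDK's scripts/rxe_cfg_small.sh as in the trace:

  # Emit each net device that rxe_cfg also reports, i.e. the RDMA-capable ones.
  mapfile -t rxe_net_devs < <(scripts/rxe_cfg_small.sh rxe-net)
  for net_dev in "${net_devs[@]}"; do
      for rxe_net_dev in "${rxe_net_devs[@]}"; do
          if [[ $net_dev == "$rxe_net_dev" ]]; then
              echo "$net_dev"
              continue 2                   # first match wins; next net device
          fi
      done
  done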
00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:31.629 192.168.100.9' 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:31.629 192.168.100.9' 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:31.629 192.168.100.9' 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma 
== rdma ']' 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3686547 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3686547 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3686547 ']' 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.629 10:37:58 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:31.629 [2024-11-07 10:37:58.983745] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:09:31.629 [2024-11-07 10:37:58.983798] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.629 [2024-11-07 10:37:59.059284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.629 [2024-11-07 10:37:59.098399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.629 [2024-11-07 10:37:59.098443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.629 [2024-11-07 10:37:59.098453] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.629 [2024-11-07 10:37:59.098461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.629 [2024-11-07 10:37:59.098468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
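[Editor's note] nvmfappstart, traced just above, backgrounds the nvmf_tgt binary with the core mask from bdevio.sh (`-m 0x78`) and then waitforlisten polls /var/tmp/spdk.sock up to max_retries=100 times before the test proceeds. A simplified sketch of that start-and-wait pattern; the 0.5 s poll interval and the use of rpc.py spdk_get_version as the liveness probe are this sketch's choices, not a copy of autotest_common.sh:

  # Launch the target with the traced flags, then poll until RPC answers.
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do          # max_retries=100, as in the trace
      if scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &> /dev/null; then
          break                            # socket is up; target is ready
      fi
      sleep 0.5
  done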
00:09:31.629 [2024-11-07 10:37:59.100338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:09:31.629 [2024-11-07 10:37:59.100452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:09:31.629 [2024-11-07 10:37:59.100553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:31.629 [2024-11-07 10:37:59.100555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:09:31.629 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:09:31.629 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0
00:09:31.629 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:31.629 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:31.629 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:31.629 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:31.629 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:09:31.629 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:31.629 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:31.629 [2024-11-07 10:37:59.266628] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21866f0/0x218abe0) succeed.
00:09:31.629 [2024-11-07 10:37:59.275976] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2187d80/0x21cc280) succeed.
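[Editor's note] The RDMA transport is created above, and the next entries create the Malloc0 bdev, the subsystem, its namespace, and the listener that bdevio connects to. Collected into one place, the whole target-side provisioning looks like this sketch (plain scripts/rpc.py calls standing in for the rpc_cmd wrapper; every flag and value is taken from the trace):

  # The full target-side provisioning for the bdevio run, as plain RPC calls.
  rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420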
00:09:31.888 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.888 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:31.888 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.888 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.888 Malloc0 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.889 [2024-11-07 10:37:59.443710] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:31.889 { 00:09:31.889 "params": { 00:09:31.889 "name": "Nvme$subsystem", 00:09:31.889 "trtype": "$TEST_TRANSPORT", 00:09:31.889 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:31.889 "adrfam": "ipv4", 00:09:31.889 "trsvcid": "$NVMF_PORT", 00:09:31.889 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:31.889 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:31.889 "hdgst": ${hdgst:-false}, 00:09:31.889 "ddgst": ${ddgst:-false} 00:09:31.889 }, 00:09:31.889 "method": "bdev_nvme_attach_controller" 00:09:31.889 } 00:09:31.889 EOF 00:09:31.889 )") 00:09:31.889 10:37:59 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:31.889 10:37:59 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:31.889 "params": { 00:09:31.889 "name": "Nvme1", 00:09:31.889 "trtype": "rdma", 00:09:31.889 "traddr": "192.168.100.8", 00:09:31.889 "adrfam": "ipv4", 00:09:31.889 "trsvcid": "4420", 00:09:31.889 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:31.889 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:31.889 "hdgst": false, 00:09:31.889 "ddgst": false 00:09:31.889 }, 00:09:31.889 "method": "bdev_nvme_attach_controller" 00:09:31.889 }' 00:09:31.889 [2024-11-07 10:37:59.492667] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:09:31.889 [2024-11-07 10:37:59.492718] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3686684 ] 00:09:32.147 [2024-11-07 10:37:59.569686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:32.147 [2024-11-07 10:37:59.612381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.147 [2024-11-07 10:37:59.612477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.147 [2024-11-07 10:37:59.612477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.147 I/O targets: 00:09:32.147 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:32.147 00:09:32.147 00:09:32.147 CUnit - A unit testing framework for C - Version 2.1-3 00:09:32.147 http://cunit.sourceforge.net/ 00:09:32.147 00:09:32.147 00:09:32.147 Suite: bdevio tests on: Nvme1n1 00:09:32.147 Test: blockdev write read block ...passed 00:09:32.147 Test: blockdev write zeroes read block ...passed 00:09:32.147 Test: blockdev write zeroes read no split ...passed 00:09:32.147 Test: blockdev write zeroes read split ...passed 00:09:32.147 Test: blockdev write zeroes read split partial ...passed 00:09:32.405 Test: blockdev reset ...[2024-11-07 10:37:59.823128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:32.405 [2024-11-07 10:37:59.845672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:09:32.405 [2024-11-07 10:37:59.872447] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:32.405 passed 00:09:32.405 Test: blockdev write read 8 blocks ...passed 00:09:32.405 Test: blockdev write read size > 128k ...passed 00:09:32.405 Test: blockdev write read invalid size ...passed 00:09:32.405 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:32.406 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:32.406 Test: blockdev write read max offset ...passed 00:09:32.406 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:32.406 Test: blockdev writev readv 8 blocks ...passed 00:09:32.406 Test: blockdev writev readv 30 x 1block ...passed 00:09:32.406 Test: blockdev writev readv block ...passed 00:09:32.406 Test: blockdev writev readv size > 128k ...passed 00:09:32.406 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:32.406 Test: blockdev comparev and writev ...[2024-11-07 10:37:59.875389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.406 [2024-11-07 10:37:59.875417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:32.406 [2024-11-07 10:37:59.875429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.406 [2024-11-07 10:37:59.875439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:32.406 [2024-11-07 10:37:59.875607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.406 [2024-11-07 10:37:59.875619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:32.406 [2024-11-07 10:37:59.875629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.406 [2024-11-07 10:37:59.875638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:32.406 [2024-11-07 10:37:59.875807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.406 [2024-11-07 10:37:59.875817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:32.406 [2024-11-07 10:37:59.875827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.406 [2024-11-07 10:37:59.875836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:32.406 [2024-11-07 10:37:59.875993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.406 [2024-11-07 10:37:59.876003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:32.406 [2024-11-07 10:37:59.876013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.406 [2024-11-07 10:37:59.876026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:32.406 passed 00:09:32.406 Test: blockdev nvme passthru rw ...passed 00:09:32.406 Test: blockdev nvme passthru vendor specific ...[2024-11-07 10:37:59.876285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:09:32.406 [2024-11-07 10:37:59.876296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:32.406 [2024-11-07 10:37:59.876342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:09:32.406 [2024-11-07 10:37:59.876352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:32.406 [2024-11-07 10:37:59.876393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:09:32.406 [2024-11-07 10:37:59.876403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:32.406 [2024-11-07 10:37:59.876444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:09:32.406 [2024-11-07 10:37:59.876454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:32.406 passed 00:09:32.406 Test: blockdev nvme admin passthru ...passed 00:09:32.406 Test: blockdev copy ...passed 00:09:32.406 00:09:32.406 Run Summary: Type Total Ran Passed Failed Inactive 00:09:32.406 suites 1 1 n/a 0 0 00:09:32.406 tests 23 23 23 0 0 00:09:32.406 asserts 152 152 152 0 n/a 00:09:32.406 00:09:32.406 Elapsed time = 0.171 seconds 00:09:32.406 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.406 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.406 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.406 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.406 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:32.406 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:32.406 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:32.406 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:32.406 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:32.406 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:32.406 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:32.406 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:32.406 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:32.406 rmmod nvme_rdma 00:09:32.406 rmmod nvme_fabrics 00:09:32.665 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:32.665 10:38:00 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:32.665 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:32.665 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3686547 ']' 00:09:32.665 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3686547 00:09:32.665 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 3686547 ']' 00:09:32.665 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3686547 00:09:32.665 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:09:32.665 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:32.665 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3686547 00:09:32.665 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:09:32.665 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:09:32.665 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3686547' 00:09:32.665 killing process with pid 3686547 00:09:32.665 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3686547 00:09:32.665 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3686547 00:09:32.924 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:32.924 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:32.924 00:09:32.924 real 0m8.193s 00:09:32.924 user 0m7.965s 00:09:32.924 sys 0m5.542s 00:09:32.924 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:32.924 10:38:00 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.924 ************************************ 00:09:32.924 END TEST nvmf_bdevio 00:09:32.924 ************************************ 00:09:32.924 10:38:00 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:32.924 00:09:32.924 real 4m7.248s 00:09:32.924 user 10m51.188s 00:09:32.924 sys 1m34.596s 00:09:32.924 10:38:00 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:32.924 10:38:00 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:32.924 ************************************ 00:09:32.924 END TEST nvmf_target_core 00:09:32.924 ************************************ 00:09:32.924 10:38:00 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:09:32.924 10:38:00 nvmf_rdma -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:32.924 10:38:00 nvmf_rdma -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:32.924 10:38:00 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:32.924 ************************************ 00:09:32.924 START TEST nvmf_target_extra 00:09:32.924 ************************************ 00:09:32.924 10:38:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:09:32.924 * Looking for test storage... 00:09:33.183 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:33.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.184 --rc genhtml_branch_coverage=1 00:09:33.184 --rc genhtml_function_coverage=1 00:09:33.184 --rc genhtml_legend=1 00:09:33.184 --rc geninfo_all_blocks=1 00:09:33.184 --rc geninfo_unexecuted_blocks=1 00:09:33.184 00:09:33.184 ' 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:33.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.184 --rc genhtml_branch_coverage=1 00:09:33.184 --rc genhtml_function_coverage=1 00:09:33.184 --rc genhtml_legend=1 00:09:33.184 --rc geninfo_all_blocks=1 00:09:33.184 --rc geninfo_unexecuted_blocks=1 00:09:33.184 00:09:33.184 ' 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:33.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.184 --rc genhtml_branch_coverage=1 00:09:33.184 --rc genhtml_function_coverage=1 00:09:33.184 --rc genhtml_legend=1 00:09:33.184 --rc geninfo_all_blocks=1 00:09:33.184 --rc geninfo_unexecuted_blocks=1 00:09:33.184 00:09:33.184 ' 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:33.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.184 --rc genhtml_branch_coverage=1 00:09:33.184 --rc genhtml_function_coverage=1 00:09:33.184 --rc genhtml_legend=1 00:09:33.184 --rc geninfo_all_blocks=1 00:09:33.184 --rc geninfo_unexecuted_blocks=1 00:09:33.184 00:09:33.184 ' 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.184 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:33.184 10:38:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:09:33.185 10:38:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:33.185 10:38:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:33.185 10:38:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:33.185 ************************************ 00:09:33.185 START TEST nvmf_example 00:09:33.185 ************************************ 00:09:33.185 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:09:33.185 * Looking for test storage... 
00:09:33.185 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:33.185 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:33.185 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:09:33.185 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:33.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.444 --rc genhtml_branch_coverage=1 00:09:33.444 --rc genhtml_function_coverage=1 00:09:33.444 --rc genhtml_legend=1 00:09:33.444 --rc geninfo_all_blocks=1 00:09:33.444 --rc geninfo_unexecuted_blocks=1 00:09:33.444 00:09:33.444 ' 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:33.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.444 --rc genhtml_branch_coverage=1 00:09:33.444 --rc genhtml_function_coverage=1 00:09:33.444 --rc genhtml_legend=1 00:09:33.444 --rc geninfo_all_blocks=1 00:09:33.444 --rc geninfo_unexecuted_blocks=1 00:09:33.444 00:09:33.444 ' 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:33.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.444 --rc genhtml_branch_coverage=1 00:09:33.444 --rc genhtml_function_coverage=1 00:09:33.444 --rc genhtml_legend=1 00:09:33.444 --rc geninfo_all_blocks=1 00:09:33.444 --rc geninfo_unexecuted_blocks=1 00:09:33.444 00:09:33.444 ' 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:33.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.444 --rc genhtml_branch_coverage=1 00:09:33.444 --rc genhtml_function_coverage=1 00:09:33.444 --rc genhtml_legend=1 00:09:33.444 --rc geninfo_all_blocks=1 00:09:33.444 --rc geninfo_unexecuted_blocks=1 00:09:33.444 00:09:33.444 ' 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
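[Editor's note] Both test wrappers gate their lcov handling on cmp_versions from scripts/common.sh, traced at length above (once for nvmf_target_extra and again here for nvmf_example): the version strings are split on '.', '-' and ':' into arrays and compared component by component up to the longer length. A compact sketch of that comparison, simplified to numeric components only (the real script validates each component via its decimal helper); the unused second argument mirrors the traced "cmp_versions 1.15 '<' 2" call shape:

  # Sketch: return 0 when version $1 is strictly less than version $3.
  cmp_lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"       # split on '.', '-' and ':'
      IFS=.-: read -ra ver2 <<< "$3"
      local max=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
      for ((v = 0; v < max; v++)); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
          ((a > b)) && return 1
          ((a < b)) && return 0
      done
      return 1                             # versions are equal
  }
  cmp_lt 1.15 '<' 2 && echo "1.15 is older than 2"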
00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.444 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:33.444 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
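[Editor's note] common.sh, re-sourced here for the example test, derives the host identity once: `nvme gen-hostnqn` emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, the UUID tail becomes NVME_HOSTID, and both are packaged as ready-to-splice connect arguments (the NVME_HOST array in the trace). A sketch of that derivation; the parameter expansion used to strip the prefix is this sketch's choice, not necessarily the common.sh source:

  # Derive the host identity once; splice into every later `nvme connect`.
  NVME_HOSTNQN=$(nvme gen-hostnqn)              # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # keep the trailing UUID only
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # e.g.: nvme connect "${NVME_HOST[@]}" -t rdma -a 192.168.100.8 -s 4420 -n <subnqn>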
00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:33.445 10:38:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
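Annotation: the NVMF_EXAMPLE command line traced above is assembled with bash array appends, the same pattern common.sh uses for NVMF_APP; expanding it later as "${NVMF_EXAMPLE[@]}" keeps each flag a separate word even when flags carry arguments. A minimal standalone sketch (the binary path is a placeholder, not the workspace path used in this run):

    # Array-append pattern from build_nvmf_example_args above.
    NVMF_APP_SHM_ID=0
    NVMF_EXAMPLE=("./build/examples/nvmf")            # placeholder path
    NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000)    # flags copied from the trace
    printf '%s\n' "${NVMF_EXAMPLE[@]}"                # prints one argument per line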
00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:40.010 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
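Annotation: the two "Found 0000:d9:00.x (0x15b3 - 0x1015)" lines come from matching the enumerated PCI functions against the Mellanox device-ID table assembled just above (vendor 0x15b3; device 0x1015 is a ConnectX-4 Lx part). A rough standalone equivalent using lspci — an assumption here, since the script consults its own pci_bus_cache rather than shelling out:

    # List Mellanox (vendor 0x15b3) PCI functions, as the trace above reports.
    mellanox=15b3
    lspci -Dnn -d "${mellanox}:" | while read -r addr _; do
        echo "Found $addr"
    done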
00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:40.010 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.010 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:40.010 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:40.011 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:40.011 10:38:06 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:40.011 10:38:06 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:40.011 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:40.011 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:40.011 altname enp217s0f0np0 00:09:40.011 altname ens818f0np0 00:09:40.011 inet 192.168.100.8/24 scope global mlx_0_0 00:09:40.011 valid_lft forever preferred_lft forever 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:40.011 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:40.011 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:40.011 altname enp217s0f1np1 00:09:40.011 altname ens818f1np1 00:09:40.011 inet 192.168.100.9/24 scope global mlx_0_1 00:09:40.011 valid_lft forever preferred_lft forever 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- 
# get_available_rdma_ips 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:40.011 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:40.012 10:38:07 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:40.012 192.168.100.9' 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:40.012 192.168.100.9' 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:40.012 192.168.100.9' 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3690143 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3690143 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 3690143 ']' 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
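Annotation: RDMA_IP_LIST above is a single newline-separated string, so the first and second target IPs are peeled off with head and tail exactly as traced in common.sh@485-486. A standalone sketch of the same parsing:

    # First/second target IP extraction, mirroring the traces above.
    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9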
00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:40.012 10:38:07 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.579 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:40.579 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:09:40.579 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:40.579 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:40.579 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.579 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:40.579 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.579 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:40.837 10:38:08 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:53.044 Initializing NVMe Controllers 00:09:53.044 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:53.044 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:53.044 Initialization complete. Launching workers. 00:09:53.044 ======================================================== 00:09:53.044 Latency(us) 00:09:53.044 Device Information : IOPS MiB/s Average min max 00:09:53.044 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 26907.50 105.11 2378.47 622.82 15005.44 00:09:53.044 ======================================================== 00:09:53.044 Total : 26907.50 105.11 2378.47 622.82 15005.44 00:09:53.044 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:53.044 rmmod nvme_rdma 00:09:53.044 rmmod nvme_fabrics 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3690143 ']' 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3690143 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 3690143 ']' 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 3690143 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3690143 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:09:53.044 10:38:19 
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3690143' 00:09:53.044 killing process with pid 3690143 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 3690143 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 3690143 00:09:53.044 nvmf threads initialize successfully 00:09:53.044 bdev subsystem init successfully 00:09:53.044 created a nvmf target service 00:09:53.044 create targets's poll groups done 00:09:53.044 all subsystems of target started 00:09:53.044 nvmf target is running 00:09:53.044 all subsystems of target stopped 00:09:53.044 destroy targets's poll groups done 00:09:53.044 destroyed the nvmf target service 00:09:53.044 bdev subsystem finish successfully 00:09:53.044 nvmf threads destroy successfully 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:53.044 10:38:19 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.044 00:09:53.044 real 0m19.272s 00:09:53.044 user 0m52.046s 00:09:53.044 sys 0m5.358s 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.044 ************************************ 00:09:53.044 END TEST nvmf_example 00:09:53.044 ************************************ 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:53.044 ************************************ 00:09:53.044 START TEST nvmf_filesystem 00:09:53.044 ************************************ 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:09:53.044 * Looking for test storage... 
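Annotation: before the filesystem test begins, note that the rpc_cmd calls traced in the example test above map one-to-one onto SPDK's rpc.py script. A sketch of the same target setup issued by hand, assuming an nvmf target already listening on the default /var/tmp/spdk.sock:

    # Same subsystem setup as the rpc_cmd traces in nvmf_example above.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512                   # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420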
00:09:53.044 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:53.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.044 --rc genhtml_branch_coverage=1 00:09:53.044 --rc genhtml_function_coverage=1 00:09:53.044 --rc genhtml_legend=1 00:09:53.044 --rc geninfo_all_blocks=1 00:09:53.044 --rc geninfo_unexecuted_blocks=1 00:09:53.044 00:09:53.044 ' 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:53.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.044 --rc genhtml_branch_coverage=1 00:09:53.044 --rc genhtml_function_coverage=1 00:09:53.044 --rc genhtml_legend=1 00:09:53.044 --rc geninfo_all_blocks=1 00:09:53.044 --rc geninfo_unexecuted_blocks=1 00:09:53.044 00:09:53.044 ' 00:09:53.044 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:53.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.045 --rc genhtml_branch_coverage=1 00:09:53.045 --rc genhtml_function_coverage=1 00:09:53.045 --rc genhtml_legend=1 00:09:53.045 --rc geninfo_all_blocks=1 00:09:53.045 --rc geninfo_unexecuted_blocks=1 00:09:53.045 00:09:53.045 ' 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:53.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.045 --rc genhtml_branch_coverage=1 00:09:53.045 --rc genhtml_function_coverage=1 00:09:53.045 --rc genhtml_legend=1 00:09:53.045 --rc geninfo_all_blocks=1 00:09:53.045 --rc geninfo_unexecuted_blocks=1 00:09:53.045 00:09:53.045 ' 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:09:53.045 10:38:20 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 
00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:53.045 10:38:20 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 
-- # CONFIG_RAID5F=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:53.045 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:53.046 10:38:20 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]]
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:09:53.046 #define SPDK_CONFIG_H
00:09:53.046 #define SPDK_CONFIG_AIO_FSDEV 1
00:09:53.046 #define SPDK_CONFIG_APPS 1
00:09:53.046 #define SPDK_CONFIG_ARCH native
00:09:53.046 #undef SPDK_CONFIG_ASAN
00:09:53.046 #undef SPDK_CONFIG_AVAHI
00:09:53.046 #undef SPDK_CONFIG_CET
00:09:53.046 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:09:53.046 #define SPDK_CONFIG_COVERAGE 1
00:09:53.046 #define SPDK_CONFIG_CROSS_PREFIX
00:09:53.046 #undef SPDK_CONFIG_CRYPTO
00:09:53.046 #undef SPDK_CONFIG_CRYPTO_MLX5
00:09:53.046 #undef SPDK_CONFIG_CUSTOMOCF
00:09:53.046 #undef SPDK_CONFIG_DAOS
00:09:53.046 #define SPDK_CONFIG_DAOS_DIR
00:09:53.046 #define SPDK_CONFIG_DEBUG 1
00:09:53.046 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:09:53.046 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:09:53.046 #define SPDK_CONFIG_DPDK_INC_DIR
00:09:53.046 #define SPDK_CONFIG_DPDK_LIB_DIR
00:09:53.046 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:09:53.046 #undef SPDK_CONFIG_DPDK_UADK
00:09:53.046 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:09:53.046 #define SPDK_CONFIG_EXAMPLES 1
00:09:53.046 #undef SPDK_CONFIG_FC
00:09:53.046 #define SPDK_CONFIG_FC_PATH
00:09:53.046 #define SPDK_CONFIG_FIO_PLUGIN 1
00:09:53.046 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:09:53.046 #define SPDK_CONFIG_FSDEV 1
00:09:53.046 #undef SPDK_CONFIG_FUSE
00:09:53.046 #undef SPDK_CONFIG_FUZZER
00:09:53.046 #define SPDK_CONFIG_FUZZER_LIB
00:09:53.046 #undef SPDK_CONFIG_GOLANG
00:09:53.046 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:09:53.046 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:09:53.046 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:09:53.046 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:09:53.046 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:09:53.046 #undef SPDK_CONFIG_HAVE_LIBBSD
00:09:53.046 #undef SPDK_CONFIG_HAVE_LZ4
00:09:53.046 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:09:53.046 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:09:53.046 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:09:53.046 #define SPDK_CONFIG_IDXD 1
00:09:53.046 #define SPDK_CONFIG_IDXD_KERNEL 1
00:09:53.046 #undef SPDK_CONFIG_IPSEC_MB
00:09:53.046 #define SPDK_CONFIG_IPSEC_MB_DIR
00:09:53.046 #define SPDK_CONFIG_ISAL 1
00:09:53.046 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:09:53.046 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:09:53.046 #define SPDK_CONFIG_LIBDIR
00:09:53.046 #undef SPDK_CONFIG_LTO
00:09:53.046 #define SPDK_CONFIG_MAX_LCORES 128
00:09:53.046 #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:09:53.046 #define SPDK_CONFIG_NVME_CUSE 1
00:09:53.046 #undef SPDK_CONFIG_OCF
00:09:53.046 #define SPDK_CONFIG_OCF_PATH
00:09:53.046 #define SPDK_CONFIG_OPENSSL_PATH
00:09:53.046 #undef SPDK_CONFIG_PGO_CAPTURE
00:09:53.046 #define SPDK_CONFIG_PGO_DIR
00:09:53.046 #undef SPDK_CONFIG_PGO_USE
00:09:53.046 #define SPDK_CONFIG_PREFIX /usr/local
00:09:53.046 #undef SPDK_CONFIG_RAID5F
00:09:53.046 #undef SPDK_CONFIG_RBD
00:09:53.046 #define SPDK_CONFIG_RDMA 1
00:09:53.046 #define SPDK_CONFIG_RDMA_PROV verbs
00:09:53.046 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:09:53.046 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:09:53.046 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:09:53.046 #define SPDK_CONFIG_SHARED 1
00:09:53.046 #undef SPDK_CONFIG_SMA
00:09:53.046 #define SPDK_CONFIG_TESTS 1
00:09:53.046 #undef SPDK_CONFIG_TSAN
00:09:53.046 #define SPDK_CONFIG_UBLK 1
00:09:53.046 #define SPDK_CONFIG_UBSAN 1
00:09:53.046 #undef SPDK_CONFIG_UNIT_TESTS
00:09:53.046 #undef SPDK_CONFIG_URING
00:09:53.046 #define SPDK_CONFIG_URING_PATH
00:09:53.046 #undef SPDK_CONFIG_URING_ZNS
00:09:53.046 #undef SPDK_CONFIG_USDT
00:09:53.046 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:09:53.046 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:09:53.046 #undef SPDK_CONFIG_VFIO_USER
00:09:53.046 #define SPDK_CONFIG_VFIO_USER_DIR
00:09:53.046 #define SPDK_CONFIG_VHOST 1
00:09:53.046 #define SPDK_CONFIG_VIRTIO 1
00:09:53.046 #undef SPDK_CONFIG_VTUNE
00:09:53.046 #define SPDK_CONFIG_VTUNE_DIR
00:09:53.046 #define SPDK_CONFIG_WERROR 1
00:09:53.046 #define SPDK_CONFIG_WPDK_DIR
00:09:53.046 #undef SPDK_CONFIG_XNVME
00:09:53.046 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
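
The applications.sh@22-24 records above probe the generated config header and substring-match it for a debug build. A minimal standalone sketch of that check, assuming the same config.h layout; the path here is illustrative, not this workspace's:

  # does the build advertise SPDK_CONFIG_DEBUG?
  config_h=/path/to/spdk/include/spdk/config.h
  if [[ -e "$config_h" && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "debug build detected"
  fi
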
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:09:53.046 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
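
paths/export.sh is re-sourced on every nested source of scripts/common.sh, so each PATH record above grows by one more /opt/{golangci,protoc,go} triple. A hedged sketch of an idempotent prepend that would keep PATH duplicate-free; path_prepend is a hypothetical helper, not part of the SPDK scripts:

  path_prepend() {
      case ":$PATH:" in
          *":$1:"*) ;;           # already present, keep PATH unchanged
          *) PATH="$1:$PATH" ;;  # otherwise prepend once
      esac
  }
  path_prepend /opt/go/1.21.1/bin
  path_prepend /opt/protoc/21.7/bin
  path_prepend /opt/golangci/1.54.2/bin
  export PATH
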
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]=
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E'
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]]
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]]
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]]
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp)
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm)
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]]
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # :
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0
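
The pm/common@70-88 records above build the power-monitor list: cpu-load and vmstat always run, and the hardware monitors (cpu-temp, bmc-pm) are added only on bare-metal Linux. A sketch reconstructed from the trace; the DMI sysfs path is a guess, since the trace only shows a dotted vendor string being compared against QEMU:

  declare -A MONITOR_RESOURCES_SUDO=(
      ["collect-bmc-pm"]=1 ["collect-cpu-load"]=0
      ["collect-cpu-temp"]=0 ["collect-vmstat"]=0
  )
  MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
  # assumed source of the "!= QEMU" check in the trace
  vendor=$(cat /sys/class/dmi/id/chassis_vendor 2>/dev/null)
  if [[ $(uname -s) == Linux && $vendor != QEMU && ! -e /.dockerenv ]]; then
      MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
  fi
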
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # :
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # :
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0
00:09:53.047 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # :
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib
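
Each flag above is traced as a bare ": <default>" followed by an export: the script keeps any value already injected by autorun-spdk.conf and falls back to a default otherwise. A sketch of that idiom with a few flags from this run; the ': "${VAR:=default}"' spelling is an assumption about the script source, since the trace only shows the expanded result:

  : "${SPDK_RUN_FUNCTIONAL_TEST:=1}";    export SPDK_RUN_FUNCTIONAL_TEST
  : "${SPDK_TEST_NVMF:=1}";              export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"; export SPDK_TEST_NVMF_TRANSPORT
  : "${SPDK_TEST_NVMF_NICS:=mlx5}";      export SPDK_TEST_NVMF_NICS
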
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']'
00:09:53.048 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
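
The @197-242 records wire up the sanitizer runtime: ASAN/UBSAN options are exported and a fresh LSAN suppression file is written so the known libfuse3 leak does not fail the run. The same steps as a standalone sketch, using the values from this trace:

  asan_suppression_file=/var/tmp/asan_suppression_file
  rm -f "$asan_suppression_file"
  echo "leak:libfuse3.so" > "$asan_suppression_file"   # suppress the known libfuse3 leak
  export LSAN_OPTIONS="suppressions=$asan_suppression_file"
  export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'
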
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV=
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]]
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]]
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]=
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt=
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']'
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind=
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind=
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']'
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j112
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=()
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE=
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@"
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=rdma
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3692296 ]]
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3692296
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]]
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.RwEzhM
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]]
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]]
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.RwEzhM/tests/target /tmp/spdk.RwEzhM
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=55004848128
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61730603008
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6725754880
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30804680704
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865301504
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=60620800
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12323033088
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12346122240
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23089152
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30864515072
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865301504
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=786432
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6173044736
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6173057024
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n'
00:09:53.049 * Looking for test storage...
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}"
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:09:53.049 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}'
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=55004848128
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size ))
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size ))
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]]
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]]
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]]
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8940347392
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 ))
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:09:53.050 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace
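
set_test_storage above parses df -T into per-mount arrays, picks the mount backing the test directory, and accepts it once roughly 2 GiB of headroom is confirmed. A condensed sketch of the selection step, not the script verbatim; it leans on GNU df --output and byte units for brevity, where the trace reads the full df -T table:

  requested_size=2147483648   # 2 GiB, as in the trace
  target_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
  mount=$(df --output=target "$target_dir" | tail -n1)          # mount point backing target_dir
  target_space=$(df -B1 --output=avail "$target_dir" | tail -n1) # free bytes on that mount
  if (( target_space >= requested_size )); then
      export SPDK_TEST_STORAGE=$target_dir
      printf '* Found test storage at %s\n' "$SPDK_TEST_STORAGE"
  fi
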
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]]
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]]
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:09:53.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:53.050 --rc genhtml_branch_coverage=1
00:09:53.050 --rc genhtml_function_coverage=1
00:09:53.050 --rc genhtml_legend=1
00:09:53.050 --rc geninfo_all_blocks=1
00:09:53.050 --rc geninfo_unexecuted_blocks=1
00:09:53.050 
00:09:53.050 '
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:09:53.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:53.050 --rc genhtml_branch_coverage=1
00:09:53.050 --rc genhtml_function_coverage=1
00:09:53.050 --rc genhtml_legend=1
00:09:53.050 --rc geninfo_all_blocks=1
00:09:53.050 --rc geninfo_unexecuted_blocks=1
00:09:53.050 
00:09:53.050 '
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:09:53.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:53.050 --rc genhtml_branch_coverage=1
00:09:53.050 --rc genhtml_function_coverage=1
00:09:53.050 --rc genhtml_legend=1
00:09:53.050 --rc geninfo_all_blocks=1
00:09:53.050 --rc geninfo_unexecuted_blocks=1
00:09:53.050 
00:09:53.050 '
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:09:53.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:53.050 --rc genhtml_branch_coverage=1
00:09:53.050 --rc genhtml_function_coverage=1
00:09:53.050 --rc genhtml_legend=1
00:09:53.050 --rc geninfo_all_blocks=1
00:09:53.050 --rc geninfo_unexecuted_blocks=1
00:09:53.050 
00:09:53.050 '
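
The cmp_versions trace above is the script deciding that lcov 1.15 predates 2.x before enabling the branch-coverage flags. A hedged reimplementation of that component-wise compare (version_lt is a hypothetical name, not the scripts/common.sh function verbatim):

  version_lt() {   # returns 0 when $1 < $2, comparing dot-separated fields numerically
      local -a v1 v2
      IFS=.- read -ra v1 <<< "$1"
      IFS=.- read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1
  }
  version_lt 1.15 2 && echo "old lcov: pass --rc lcov_branch_coverage=1 explicitly"
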
10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:53.050 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:53.051 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable
00:09:53.051 10:38:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=()
00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=()
00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@320 -- # e810=() 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:59.621 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:59.621 
10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:59.621 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:59.621 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.621 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:59.622 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:59.622 10:38:26 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:59.622 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:59.622 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:59.622 altname enp217s0f0np0 00:09:59.622 altname ens818f0np0 00:09:59.622 inet 192.168.100.8/24 scope global mlx_0_0 00:09:59.622 valid_lft forever preferred_lft forever 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:59.622 10:38:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:59.622 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:59.622 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:59.622 altname enp217s0f1np1 00:09:59.622 altname ens818f1np1 00:09:59.622 inet 192.168.100.9/24 scope global mlx_0_1 00:09:59.622 valid_lft forever preferred_lft forever 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:59.622 10:38:27 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_1 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:59.622 192.168.100.9' 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:59.622 192.168.100.9' 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:59.622 192.168.100.9' 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:09:59.622 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.623 ************************************ 00:09:59.623 START TEST nvmf_filesystem_no_in_capsule 00:09:59.623 ************************************ 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.623 10:38:27 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3695569 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3695569 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3695569 ']' 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:59.623 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.623 [2024-11-07 10:38:27.199412] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:09:59.623 [2024-11-07 10:38:27.199473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.623 [2024-11-07 10:38:27.277226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.882 [2024-11-07 10:38:27.319141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.882 [2024-11-07 10:38:27.319179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.882 [2024-11-07 10:38:27.319188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.882 [2024-11-07 10:38:27.319197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.882 [2024-11-07 10:38:27.319203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:59.882 [2024-11-07 10:38:27.320977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.882 [2024-11-07 10:38:27.321071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.883 [2024-11-07 10:38:27.321165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.883 [2024-11-07 10:38:27.321168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.883 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:59.883 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:09:59.883 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:59.883 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:59.883 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.883 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.883 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:59.883 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:59.883 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.883 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.883 [2024-11-07 10:38:27.465941] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:59.883 [2024-11-07 10:38:27.487722] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21fedf0/0x22032e0) succeed. 00:09:59.883 [2024-11-07 10:38:27.497059] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2200480/0x2244980) succeed. 
00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.143 Malloc1 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.143 [2024-11-07 10:38:27.750918] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:00.143 10:38:27 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.143 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:00.143 { 00:10:00.143 "name": "Malloc1", 00:10:00.143 "aliases": [ 00:10:00.143 "918b466d-ce98-48f8-be85-b97ebe91d168" 00:10:00.143 ], 00:10:00.143 "product_name": "Malloc disk", 00:10:00.143 "block_size": 512, 00:10:00.143 "num_blocks": 1048576, 00:10:00.143 "uuid": "918b466d-ce98-48f8-be85-b97ebe91d168", 00:10:00.143 "assigned_rate_limits": { 00:10:00.143 "rw_ios_per_sec": 0, 00:10:00.143 "rw_mbytes_per_sec": 0, 00:10:00.143 "r_mbytes_per_sec": 0, 00:10:00.143 "w_mbytes_per_sec": 0 00:10:00.143 }, 00:10:00.143 "claimed": true, 00:10:00.143 "claim_type": "exclusive_write", 00:10:00.143 "zoned": false, 00:10:00.143 "supported_io_types": { 00:10:00.143 "read": true, 00:10:00.143 "write": true, 00:10:00.143 "unmap": true, 00:10:00.143 "flush": true, 00:10:00.143 "reset": true, 00:10:00.144 "nvme_admin": false, 00:10:00.144 "nvme_io": false, 00:10:00.144 "nvme_io_md": false, 00:10:00.144 "write_zeroes": true, 00:10:00.144 "zcopy": true, 00:10:00.144 "get_zone_info": false, 00:10:00.144 "zone_management": false, 00:10:00.144 "zone_append": false, 00:10:00.144 "compare": false, 00:10:00.144 "compare_and_write": false, 00:10:00.144 "abort": true, 00:10:00.144 "seek_hole": false, 00:10:00.144 "seek_data": false, 00:10:00.144 "copy": true, 00:10:00.144 "nvme_iov_md": false 00:10:00.144 }, 00:10:00.144 "memory_domains": [ 00:10:00.144 { 00:10:00.144 "dma_device_id": "system", 00:10:00.144 "dma_device_type": 1 00:10:00.144 }, 00:10:00.144 { 00:10:00.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.144 "dma_device_type": 2 00:10:00.144 } 00:10:00.144 ], 00:10:00.144 "driver_specific": {} 00:10:00.144 } 00:10:00.144 ]' 00:10:00.144 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:00.403 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:00.403 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:00.403 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:00.403 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:00.403 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:00.403 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:10:00.403 10:38:27 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:01.341 10:38:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:01.341 10:38:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:01.341 10:38:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:01.341 10:38:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:01.341 10:38:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:03.245 10:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:03.246 10:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:03.246 10:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:03.246 10:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:03.246 10:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:03.246 10:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:03.246 10:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:03.246 10:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:03.246 10:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:03.246 10:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:03.246 10:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:03.246 10:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:03.246 10:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:03.246 10:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:03.246 10:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:03.504 10:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:10:03.504 10:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:03.504 10:38:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:03.504 10:38:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:04.440 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:04.441 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:04.441 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:04.441 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.441 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.441 ************************************ 00:10:04.441 START TEST filesystem_ext4 00:10:04.441 ************************************ 00:10:04.441 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:04.441 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:04.441 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:04.441 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:04.441 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:04.441 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:04.441 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:04.441 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:04.441 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:04.441 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:04.441 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:04.441 mke2fs 1.47.0 (5-Feb-2023) 00:10:04.700 Discarding device blocks: 0/522240 done 00:10:04.700 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:04.700 Filesystem UUID: c26d6cf6-df7e-4b2f-b960-fce7ee83a949 00:10:04.700 Superblock backups stored on 
blocks: 00:10:04.700 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:04.700 00:10:04.700 Allocating group tables: 0/64 done 00:10:04.700 Writing inode tables: 0/64 done 00:10:04.700 Creating journal (8192 blocks): done 00:10:04.700 Writing superblocks and filesystem accounting information: 0/64 done 00:10:04.700 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3695569 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:04.700 00:10:04.700 real 0m0.205s 00:10:04.700 user 0m0.031s 00:10:04.700 sys 0m0.077s 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:04.700 ************************************ 00:10:04.700 END TEST filesystem_ext4 00:10:04.700 ************************************ 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:10:04.700 ************************************ 00:10:04.700 START TEST filesystem_btrfs 00:10:04.700 ************************************ 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:10:04.700 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:04.701 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:10:04.701 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:10:04.701 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:10:04.701 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:10:04.701 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:04.960 btrfs-progs v6.8.1 00:10:04.960 See https://btrfs.readthedocs.io for more information. 00:10:04.960 00:10:04.960 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:04.960 NOTE: several default settings have changed in version 5.15, please make sure 00:10:04.960 this does not affect your deployments: 00:10:04.960 - DUP for metadata (-m dup) 00:10:04.960 - enabled no-holes (-O no-holes) 00:10:04.960 - enabled free-space-tree (-R free-space-tree) 00:10:04.960 00:10:04.960 Label: (null) 00:10:04.960 UUID: 70b3d6fb-dc29-4594-912f-9960088e6d74 00:10:04.960 Node size: 16384 00:10:04.960 Sector size: 4096 (CPU page size: 4096) 00:10:04.960 Filesystem size: 510.00MiB 00:10:04.960 Block group profiles: 00:10:04.960 Data: single 8.00MiB 00:10:04.960 Metadata: DUP 32.00MiB 00:10:04.960 System: DUP 8.00MiB 00:10:04.960 SSD detected: yes 00:10:04.960 Zoned device: no 00:10:04.960 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:04.960 Checksum: crc32c 00:10:04.960 Number of devices: 1 00:10:04.960 Devices: 00:10:04.960 ID SIZE PATH 00:10:04.960 1 510.00MiB /dev/nvme0n1p1 00:10:04.960 00:10:04.960 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:10:04.960 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:04.960 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:04.960 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:04.960 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:04.960 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:04.960 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:04.960 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:04.960 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3695569 00:10:04.960 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:04.960 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:04.960 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:04.960 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:04.960 00:10:04.960 real 0m0.249s 00:10:04.960 user 0m0.037s 00:10:04.960 sys 0m0.119s 00:10:04.960 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.960 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:04.960 ************************************ 00:10:04.960 END TEST filesystem_btrfs 
00:10:04.960 ************************************ 00:10:05.219 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:05.219 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:05.219 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:05.219 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.219 ************************************ 00:10:05.219 START TEST filesystem_xfs 00:10:05.219 ************************************ 00:10:05.219 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:10:05.219 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:05.219 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:05.219 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:05.219 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:10:05.219 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:05.219 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:05.220 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:05.220 = sectsz=512 attr=2, projid32bit=1 00:10:05.220 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:05.220 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:05.220 data = bsize=4096 blocks=130560, imaxpct=25 00:10:05.220 = sunit=0 swidth=0 blks 00:10:05.220 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:05.220 log =internal log bsize=4096 blocks=16384, version=2 00:10:05.220 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:05.220 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:05.220 Discarding blocks...Done. 
00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3695569 00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:05.220 00:10:05.220 real 0m0.213s 00:10:05.220 user 0m0.028s 00:10:05.220 sys 0m0.079s 00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:05.220 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:05.220 ************************************ 00:10:05.220 END TEST filesystem_xfs 00:10:05.220 ************************************ 00:10:05.478 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:05.478 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:05.478 10:38:32 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:06.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.416 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:06.416 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:10:06.416 10:38:33 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:06.416 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.416 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:06.416 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.416 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:06.416 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.416 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.416 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.416 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.416 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:06.416 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3695569 00:10:06.416 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3695569 ']' 00:10:06.416 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3695569 00:10:06.416 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:06.416 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:06.416 10:38:33 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3695569 00:10:06.416 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:06.416 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:06.416 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3695569' 00:10:06.416 killing process with pid 3695569 00:10:06.416 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 3695569 00:10:06.416 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@976 -- # wait 3695569 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:06.985 00:10:06.985 real 0m7.228s 00:10:06.985 user 0m28.127s 00:10:06.985 sys 0m1.148s 00:10:06.985 10:38:34 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.985 ************************************ 00:10:06.985 END TEST nvmf_filesystem_no_in_capsule 00:10:06.985 ************************************ 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:06.985 ************************************ 00:10:06.985 START TEST nvmf_filesystem_in_capsule 00:10:06.985 ************************************ 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3696939 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3696939 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3696939 ']' 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
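The nvmfappstart/waitforlisten pair traced here amounts to launching the target app and polling its RPC socket until it answers. The binary path, flags, rpc_addr, and max_retries below are taken from the trace; the polling test itself is an assumption, since the loop body is not shown:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    # wait until the target creates and listens on its UNIX-domain RPC socket
    for ((i = 0; i < max_retries; i++)); do
        [ -S "$rpc_addr" ] && break
        sleep 0.1
    done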
00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:06.985 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.985 [2024-11-07 10:38:34.480025] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:10:06.985 [2024-11-07 10:38:34.480079] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.985 [2024-11-07 10:38:34.553639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.985 [2024-11-07 10:38:34.588567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.985 [2024-11-07 10:38:34.588610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.985 [2024-11-07 10:38:34.588619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.985 [2024-11-07 10:38:34.588628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.985 [2024-11-07 10:38:34.588638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.985 [2024-11-07 10:38:34.590458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.985 [2024-11-07 10:38:34.590557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.985 [2024-11-07 10:38:34.590610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.985 [2024-11-07 10:38:34.590612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.244 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:07.244 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:07.244 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:07.244 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:07.244 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.244 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.244 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:07.244 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:10:07.244 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.244 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.244 [2024-11-07 10:38:34.766103] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x603df0/0x6082e0) 
succeed. 00:10:07.244 [2024-11-07 10:38:34.776547] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x605480/0x649980) succeed. 00:10:07.244 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.244 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:07.244 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.244 10:38:34 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.504 Malloc1 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.504 [2024-11-07 10:38:35.052679] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 
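The target-side provisioning just traced can be reproduced outside the harness with scripts/rpc.py (rpc_cmd is a thin wrapper around it); the arguments are exactly the ones visible in the trace:

    # RDMA transport with 4096-byte in-capsule data, matching in_capsule=4096
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
    # 512 MiB malloc bdev with 512-byte blocks
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420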
00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.504 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:07.504 { 00:10:07.504 "name": "Malloc1", 00:10:07.504 "aliases": [ 00:10:07.504 "f8544002-d920-4b1d-87a8-b3bf638c1b06" 00:10:07.504 ], 00:10:07.504 "product_name": "Malloc disk", 00:10:07.504 "block_size": 512, 00:10:07.505 "num_blocks": 1048576, 00:10:07.505 "uuid": "f8544002-d920-4b1d-87a8-b3bf638c1b06", 00:10:07.505 "assigned_rate_limits": { 00:10:07.505 "rw_ios_per_sec": 0, 00:10:07.505 "rw_mbytes_per_sec": 0, 00:10:07.505 "r_mbytes_per_sec": 0, 00:10:07.505 "w_mbytes_per_sec": 0 00:10:07.505 }, 00:10:07.505 "claimed": true, 00:10:07.505 "claim_type": "exclusive_write", 00:10:07.505 "zoned": false, 00:10:07.505 "supported_io_types": { 00:10:07.505 "read": true, 00:10:07.505 "write": true, 00:10:07.505 "unmap": true, 00:10:07.505 "flush": true, 00:10:07.505 "reset": true, 00:10:07.505 "nvme_admin": false, 00:10:07.505 "nvme_io": false, 00:10:07.505 "nvme_io_md": false, 00:10:07.505 "write_zeroes": true, 00:10:07.505 "zcopy": true, 00:10:07.505 "get_zone_info": false, 00:10:07.505 "zone_management": false, 00:10:07.505 "zone_append": false, 00:10:07.505 "compare": false, 00:10:07.505 "compare_and_write": false, 00:10:07.505 "abort": true, 00:10:07.505 "seek_hole": false, 00:10:07.505 "seek_data": false, 00:10:07.505 "copy": true, 00:10:07.505 "nvme_iov_md": false 00:10:07.505 }, 00:10:07.505 "memory_domains": [ 00:10:07.505 { 00:10:07.505 "dma_device_id": "system", 00:10:07.505 "dma_device_type": 1 00:10:07.505 }, 00:10:07.505 { 00:10:07.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.505 "dma_device_type": 2 00:10:07.505 } 00:10:07.505 ], 00:10:07.505 "driver_specific": {} 00:10:07.505 } 00:10:07.505 ]' 00:10:07.505 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:07.505 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:07.505 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:07.764 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:07.764 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:07.764 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:07.764 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 
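get_bdev_size, completed just above, derives the bdev size in MiB from its JSON description: with this run's values, 512-byte blocks times 1048576 blocks gives 512 MiB, which filesystem.sh then scales to the 536870912-byte malloc_size. The same computation standalone:

    bdev_info=$(scripts/rpc.py bdev_get_bdevs -b Malloc1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 512 in this run
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1048576 in this run
    echo $(((bs * nb) / 1024 / 1024))             # prints 512 (MiB)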
00:10:07.764 10:38:35 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:08.700 10:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:08.700 10:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:08.700 10:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:08.700 10:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:08.700 10:38:36 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:10.605 10:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:10.605 10:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:10.605 10:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:10.605 10:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:10.605 10:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:10.605 10:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:10.605 10:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:10.605 10:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:10.605 10:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:10.605 10:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:10.605 10:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:10.605 10:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:10.605 10:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:10.605 10:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:10.605 10:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:10.605 10:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:10.605 10:38:38 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:10.605 10:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:10.864 10:38:38 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:11.801 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:11.802 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:11.802 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:11.802 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:11.802 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.802 ************************************ 00:10:11.802 START TEST filesystem_in_capsule_ext4 00:10:11.802 ************************************ 00:10:11.802 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:11.802 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:11.802 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:11.802 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:11.802 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:11.802 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:11.802 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:11.802 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:11.802 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:11.802 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:11.802 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:11.802 mke2fs 1.47.0 (5-Feb-2023) 00:10:11.802 Discarding device blocks: 0/522240 done 00:10:11.802 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:11.802 Filesystem UUID: 66fc4b45-d5f8-436c-82ee-b4e2d216567e 00:10:11.802 
Superblock backups stored on blocks: 00:10:11.802 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:11.802 00:10:11.802 Allocating group tables: 0/64 done 00:10:11.802 Writing inode tables: 0/64 done 00:10:11.802 Creating journal (8192 blocks): done 00:10:12.066 Writing superblocks and filesystem accounting information: 0/64 done 00:10:12.066 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3696939 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:12.066 00:10:12.066 real 0m0.191s 00:10:12.066 user 0m0.029s 00:10:12.066 sys 0m0.072s 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:12.066 ************************************ 00:10:12.066 END TEST filesystem_in_capsule_ext4 00:10:12.066 ************************************ 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:12.066 10:38:39 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.066 ************************************ 00:10:12.066 START TEST filesystem_in_capsule_btrfs 00:10:12.066 ************************************ 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:10:12.066 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:10:12.067 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:10:12.067 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:12.067 btrfs-progs v6.8.1 00:10:12.067 See https://btrfs.readthedocs.io for more information. 00:10:12.067 00:10:12.067 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:12.067 NOTE: several default settings have changed in version 5.15, please make sure 00:10:12.067 this does not affect your deployments: 00:10:12.067 - DUP for metadata (-m dup) 00:10:12.067 - enabled no-holes (-O no-holes) 00:10:12.067 - enabled free-space-tree (-R free-space-tree) 00:10:12.067 00:10:12.067 Label: (null) 00:10:12.067 UUID: a466c31f-5df7-4932-8077-f05312ef2d33 00:10:12.067 Node size: 16384 00:10:12.067 Sector size: 4096 (CPU page size: 4096) 00:10:12.067 Filesystem size: 510.00MiB 00:10:12.067 Block group profiles: 00:10:12.067 Data: single 8.00MiB 00:10:12.067 Metadata: DUP 32.00MiB 00:10:12.067 System: DUP 8.00MiB 00:10:12.067 SSD detected: yes 00:10:12.067 Zoned device: no 00:10:12.067 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:12.067 Checksum: crc32c 00:10:12.067 Number of devices: 1 00:10:12.067 Devices: 00:10:12.067 ID SIZE PATH 00:10:12.067 1 510.00MiB /dev/nvme0n1p1 00:10:12.067 00:10:12.067 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:10:12.067 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:12.329 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3696939 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:12.330 00:10:12.330 real 0m0.244s 00:10:12.330 user 0m0.034s 00:10:12.330 sys 0m0.121s 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.330 ************************************ 00:10:12.330 END TEST filesystem_in_capsule_btrfs 00:10:12.330 ************************************ 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.330 ************************************ 00:10:12.330 START TEST filesystem_in_capsule_xfs 00:10:12.330 ************************************ 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:10:12.330 10:38:39 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:12.589 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:12.589 = sectsz=512 attr=2, projid32bit=1 00:10:12.589 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:12.589 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:12.589 data = bsize=4096 blocks=130560, imaxpct=25 00:10:12.589 = sunit=0 swidth=0 blks 00:10:12.589 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:12.589 log =internal log bsize=4096 blocks=16384, version=2 00:10:12.589 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:12.589 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:12.589 Discarding blocks...Done. 
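Each filesystem variant then runs the same smoke test (target/filesystem.sh lines 23-43 in the trace): mount the new filesystem, do a small write/delete cycle with syncs, unmount, and confirm both the target process and the block devices survived:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa       # prove the filesystem accepts I/O over NVMe-oF
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"          # target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1
    lsblk -l -o NAME | grep -q -w nvme0n1p1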
00:10:12.589 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:12.589 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:12.589 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:12.589 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:12.589 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:12.589 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:12.589 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:12.589 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:12.589 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3696939 00:10:12.589 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:12.589 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:12.589 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:12.589 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:12.589 00:10:12.589 real 0m0.209s 00:10:12.589 user 0m0.022s 00:10:12.589 sys 0m0.085s 00:10:12.589 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:12.589 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:12.589 ************************************ 00:10:12.589 END TEST filesystem_in_capsule_xfs 00:10:12.589 ************************************ 00:10:12.589 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:12.589 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:12.589 10:38:40 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:13.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.526 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:13.526 10:38:41 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:10:13.526 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:13.526 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.526 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:13.526 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.526 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:13.526 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:13.526 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.526 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.786 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.786 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:13.786 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3696939 00:10:13.786 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3696939 ']' 00:10:13.786 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3696939 00:10:13.786 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:13.786 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:13.786 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3696939 00:10:13.786 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:13.786 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:13.786 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3696939' 00:10:13.786 killing process with pid 3696939 00:10:13.786 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 3696939 00:10:13.786 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 3696939 00:10:14.045 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:14.045 00:10:14.045 real 0m7.235s 
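killprocess, traced above, sanity-checks the pid before signalling: it verifies the process still exists and, on Linux, reads the command name back (reactor_0 here) to refuse killing an unexpected process. A sketch reconstructed from the xtrace, not verbatim SPDK source:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                     # must still exist
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" != sudo ] || return 1    # never signal a bare sudo
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }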
00:10:14.045 user 0m28.140s 00:10:14.045 sys 0m1.161s 00:10:14.045 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:14.045 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:14.045 ************************************ 00:10:14.045 END TEST nvmf_filesystem_in_capsule 00:10:14.045 ************************************ 00:10:14.045 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:14.045 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:14.045 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:14.045 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:14.045 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:14.046 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:14.046 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:14.046 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:14.305 rmmod nvme_rdma 00:10:14.305 rmmod nvme_fabrics 00:10:14.305 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:14.305 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:14.305 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:14.305 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:14.305 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:14.305 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:14.305 00:10:14.305 real 0m21.706s 00:10:14.305 user 0m58.350s 00:10:14.305 sys 0m7.695s 00:10:14.305 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:14.305 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.305 ************************************ 00:10:14.305 END TEST nvmf_filesystem 00:10:14.305 ************************************ 00:10:14.305 10:38:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:10:14.305 10:38:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:14.305 10:38:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:14.305 10:38:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:14.305 ************************************ 00:10:14.305 START TEST nvmf_target_discovery 00:10:14.305 ************************************ 00:10:14.305 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:10:14.305 * Looking for test storage... 
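nvmftestfini's kernel-module teardown is what produced the rmmod lines above: with errors tolerated, nvme-rdma is unloaded (up to 20 attempts), then nvme-fabrics is removed and set -e restored. Condensed, with the break-on-success inferred from the single iteration seen in this run:

    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break
    done
    modprobe -v -r nvme-fabrics
    set -e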
00:10:14.305 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:14.305 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:14.305 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:10:14.305 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:14.565 10:38:41 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:14.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.565 --rc genhtml_branch_coverage=1 00:10:14.565 --rc genhtml_function_coverage=1 00:10:14.565 --rc genhtml_legend=1 00:10:14.565 --rc geninfo_all_blocks=1 00:10:14.565 --rc geninfo_unexecuted_blocks=1 00:10:14.565 00:10:14.565 ' 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:14.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.565 --rc genhtml_branch_coverage=1 00:10:14.565 --rc genhtml_function_coverage=1 00:10:14.565 --rc genhtml_legend=1 00:10:14.565 --rc geninfo_all_blocks=1 00:10:14.565 --rc geninfo_unexecuted_blocks=1 00:10:14.565 00:10:14.565 ' 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:14.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.565 --rc genhtml_branch_coverage=1 00:10:14.565 --rc genhtml_function_coverage=1 00:10:14.565 --rc genhtml_legend=1 00:10:14.565 --rc geninfo_all_blocks=1 00:10:14.565 --rc geninfo_unexecuted_blocks=1 00:10:14.565 00:10:14.565 ' 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:14.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.565 --rc genhtml_branch_coverage=1 00:10:14.565 --rc genhtml_function_coverage=1 00:10:14.565 --rc genhtml_legend=1 00:10:14.565 --rc geninfo_all_blocks=1 00:10:14.565 --rc geninfo_unexecuted_blocks=1 00:10:14.565 00:10:14.565 ' 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.565 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.566 10:38:42 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.566 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:14.566 10:38:42 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:21.254 10:38:48 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:21.254 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:21.254 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:21.254 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.254 10:38:48 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:21.254 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.255 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.255 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:21.255 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:21.255 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.255 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:21.255 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:21.255 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:21.255 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:21.255 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:21.255 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:10:21.255 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:21.255 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:10:21.255 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:21.255 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:21.255 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:21.255 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:21.255 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
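For reference, the kernel modules that load_ib_rdma_modules brings up in the trace above, collected into one plain sequence (a sketch reconstructed from the logged modprobe calls in nvmf/common.sh):

    # RDMA/IB stack required before rxe_cfg and the NVMe-oF target can use the mlx5 ports
    modprobe ib_cm
    modprobe ib_core
    modprobe ib_umad
    modprobe ib_uverbs
    modprobe iw_cm
    modprobe rdma_cm
    modprobe rdma_ucm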
00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:21.514 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:21.515 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:21.515 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:21.515 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:21.515 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:21.515 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:21.515 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:21.515 10:38:48 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:21.515 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:21.515 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:21.515 altname enp217s0f0np0 00:10:21.515 altname ens818f0np0 00:10:21.515 inet 192.168.100.8/24 scope global mlx_0_0 00:10:21.515 valid_lft forever preferred_lft forever 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:21.515 10:38:49 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:21.515 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:21.515 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:21.515 altname enp217s0f1np1 00:10:21.515 altname ens818f1np1 00:10:21.515 inet 192.168.100.9/24 scope global mlx_0_1 00:10:21.515 valid_lft forever preferred_lft forever 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 
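The per-interface lookup that produced 192.168.100.8 and 192.168.100.9 above reduces to a three-stage pipeline; a minimal sketch of get_ip_address as it appears in the trace (nvmf/common.sh@116-117):

    # Take the 4th field of `ip -o -4 addr show` (the CIDR address) and strip the prefix length
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8
    get_ip_address mlx_0_1   # -> 192.168.100.9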
00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:21.515 192.168.100.9' 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:21.515 192.168.100.9' 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:21.515 192.168.100.9' 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:21.515 10:38:49 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3701790 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3701790 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 3701790 ']' 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.515 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:21.516 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.516 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:21.516 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.775 [2024-11-07 10:38:49.193816] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:10:21.775 [2024-11-07 10:38:49.193864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.775 [2024-11-07 10:38:49.267906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.775 [2024-11-07 10:38:49.307965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.775 [2024-11-07 10:38:49.308006] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.775 [2024-11-07 10:38:49.308015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.775 [2024-11-07 10:38:49.308024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.775 [2024-11-07 10:38:49.308047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
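The target launch traced here comes down to starting nvmf_tgt with the logged flags and waiting on its RPC socket; a hedged sketch, with the binary path and waitforlisten helper taken from the trace (pid capture simplified for illustration):

    # Start the NVMe-oF target: shm id 0, all trace groups (0xFFFF), cores 0-3 (mask 0xF)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock accepts RPCs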
00:10:21.775 [2024-11-07 10:38:49.309843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.775 [2024-11-07 10:38:49.309940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.775 [2024-11-07 10:38:49.310034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.775 [2024-11-07 10:38:49.310036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.775 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:21.775 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:10:21.775 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:21.775 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:21.775 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.034 [2024-11-07 10:38:49.480792] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15f9df0/0x15fe2e0) succeed. 00:10:22.034 [2024-11-07 10:38:49.490134] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15fb480/0x163f980) succeed. 
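With the RDMA transport created and both IB devices up, the test builds its four discovery targets; condensed from the target/discovery.sh@26-30 loop as traced below (NULL_BDEV_SIZE=102400 and NULL_BLOCK_SIZE=512 as set earlier in the trace):

    # One 100 MiB null bdev per subsystem, each listening on the first RDMA IP
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create "Null$i" 102400 512
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
    done
    rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430

The `nvme discover` output further down then reports six records: the current discovery subsystem, the four cnode subsystems, and the port-4430 referral.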
00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.034 Null1 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.034 [2024-11-07 10:38:49.666151] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.034 Null2 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:22.034 10:38:49 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:10:22.034 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.035 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.294 Null3 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.294 10:38:49 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.294 Null4 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:10:22.294 00:10:22.294 Discovery Log Number of Records 6, Generation counter 6 00:10:22.294 =====Discovery Log Entry 0====== 00:10:22.294 trtype: rdma 00:10:22.294 adrfam: ipv4 00:10:22.294 subtype: current discovery subsystem 00:10:22.294 treq: not required 00:10:22.294 portid: 0 00:10:22.294 trsvcid: 4420 00:10:22.294 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:22.294 traddr: 192.168.100.8 00:10:22.294 eflags: explicit discovery connections, duplicate discovery information 00:10:22.294 rdma_prtype: not specified 00:10:22.294 rdma_qptype: connected 00:10:22.294 rdma_cms: rdma-cm 00:10:22.294 rdma_pkey: 0x0000 00:10:22.294 =====Discovery Log Entry 1====== 00:10:22.294 trtype: rdma 00:10:22.294 adrfam: ipv4 00:10:22.294 subtype: nvme subsystem 00:10:22.294 treq: not required 00:10:22.294 portid: 0 00:10:22.294 trsvcid: 4420 00:10:22.294 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:22.294 traddr: 192.168.100.8 00:10:22.294 eflags: none 00:10:22.294 rdma_prtype: not specified 00:10:22.294 rdma_qptype: connected 00:10:22.294 rdma_cms: rdma-cm 00:10:22.294 rdma_pkey: 0x0000 00:10:22.294 =====Discovery Log Entry 2====== 00:10:22.294 trtype: rdma 00:10:22.294 adrfam: ipv4 00:10:22.294 subtype: nvme subsystem 00:10:22.294 treq: not required 00:10:22.294 portid: 0 00:10:22.294 trsvcid: 4420 00:10:22.294 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:22.294 traddr: 192.168.100.8 00:10:22.294 eflags: none 00:10:22.294 rdma_prtype: not specified 00:10:22.294 rdma_qptype: connected 00:10:22.294 rdma_cms: rdma-cm 00:10:22.294 rdma_pkey: 0x0000 00:10:22.294 =====Discovery Log Entry 3====== 00:10:22.294 trtype: rdma 00:10:22.294 adrfam: ipv4 00:10:22.294 subtype: nvme subsystem 00:10:22.294 treq: not required 00:10:22.294 portid: 0 00:10:22.294 trsvcid: 4420 00:10:22.294 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:22.294 traddr: 192.168.100.8 00:10:22.294 eflags: none 00:10:22.294 rdma_prtype: not specified 00:10:22.294 rdma_qptype: connected 00:10:22.294 rdma_cms: rdma-cm 00:10:22.294 rdma_pkey: 0x0000 00:10:22.294 =====Discovery Log Entry 4====== 00:10:22.294 trtype: rdma 00:10:22.294 adrfam: ipv4 00:10:22.294 subtype: nvme subsystem 00:10:22.294 treq: not required 00:10:22.294 portid: 0 00:10:22.294 trsvcid: 4420 00:10:22.294 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:22.294 traddr: 192.168.100.8 00:10:22.294 eflags: none 00:10:22.294 rdma_prtype: not specified 00:10:22.294 rdma_qptype: connected 00:10:22.294 rdma_cms: rdma-cm 00:10:22.294 rdma_pkey: 0x0000 00:10:22.294 =====Discovery Log Entry 5====== 00:10:22.294 trtype: rdma 00:10:22.294 adrfam: ipv4 00:10:22.294 subtype: discovery subsystem referral 00:10:22.294 treq: not required 00:10:22.294 portid: 0 00:10:22.294 trsvcid: 4430 00:10:22.294 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:22.294 traddr: 192.168.100.8 00:10:22.294 eflags: none 00:10:22.294 rdma_prtype: unrecognized 00:10:22.294 rdma_qptype: unrecognized 00:10:22.294 rdma_cms: unrecognized 00:10:22.294 rdma_pkey: 0x0000 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:22.294 Perform nvmf subsystem discovery via RPC 00:10:22.294 10:38:49 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.294 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.294 [ 00:10:22.294 { 00:10:22.294 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:22.294 "subtype": "Discovery", 00:10:22.294 "listen_addresses": [ 00:10:22.294 { 00:10:22.294 "trtype": "RDMA", 00:10:22.294 "adrfam": "IPv4", 00:10:22.294 "traddr": "192.168.100.8", 00:10:22.294 "trsvcid": "4420" 00:10:22.294 } 00:10:22.294 ], 00:10:22.294 "allow_any_host": true, 00:10:22.295 "hosts": [] 00:10:22.295 }, 00:10:22.295 { 00:10:22.295 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:22.295 "subtype": "NVMe", 00:10:22.295 "listen_addresses": [ 00:10:22.295 { 00:10:22.295 "trtype": "RDMA", 00:10:22.295 "adrfam": "IPv4", 00:10:22.295 "traddr": "192.168.100.8", 00:10:22.295 "trsvcid": "4420" 00:10:22.295 } 00:10:22.295 ], 00:10:22.295 "allow_any_host": true, 00:10:22.295 "hosts": [], 00:10:22.295 "serial_number": "SPDK00000000000001", 00:10:22.295 "model_number": "SPDK bdev Controller", 00:10:22.295 "max_namespaces": 32, 00:10:22.295 "min_cntlid": 1, 00:10:22.295 "max_cntlid": 65519, 00:10:22.295 "namespaces": [ 00:10:22.295 { 00:10:22.295 "nsid": 1, 00:10:22.295 "bdev_name": "Null1", 00:10:22.295 "name": "Null1", 00:10:22.295 "nguid": "40502A398BFC42BB90FAA2AD01C7E52C", 00:10:22.295 "uuid": "40502a39-8bfc-42bb-90fa-a2ad01c7e52c" 00:10:22.295 } 00:10:22.295 ] 00:10:22.295 }, 00:10:22.295 { 00:10:22.295 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:22.295 "subtype": "NVMe", 00:10:22.295 "listen_addresses": [ 00:10:22.295 { 00:10:22.295 "trtype": "RDMA", 00:10:22.295 "adrfam": "IPv4", 00:10:22.295 "traddr": "192.168.100.8", 00:10:22.295 "trsvcid": "4420" 00:10:22.295 } 00:10:22.295 ], 00:10:22.295 "allow_any_host": true, 00:10:22.295 "hosts": [], 00:10:22.295 "serial_number": "SPDK00000000000002", 00:10:22.295 "model_number": "SPDK bdev Controller", 00:10:22.295 "max_namespaces": 32, 00:10:22.295 "min_cntlid": 1, 00:10:22.295 "max_cntlid": 65519, 00:10:22.295 "namespaces": [ 00:10:22.295 { 00:10:22.295 "nsid": 1, 00:10:22.295 "bdev_name": "Null2", 00:10:22.295 "name": "Null2", 00:10:22.295 "nguid": "DA77A53B49B54E519874CB8E6082A7CE", 00:10:22.295 "uuid": "da77a53b-49b5-4e51-9874-cb8e6082a7ce" 00:10:22.295 } 00:10:22.295 ] 00:10:22.295 }, 00:10:22.295 { 00:10:22.295 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:22.295 "subtype": "NVMe", 00:10:22.295 "listen_addresses": [ 00:10:22.295 { 00:10:22.295 "trtype": "RDMA", 00:10:22.295 "adrfam": "IPv4", 00:10:22.295 "traddr": "192.168.100.8", 00:10:22.295 "trsvcid": "4420" 00:10:22.295 } 00:10:22.295 ], 00:10:22.295 "allow_any_host": true, 00:10:22.295 "hosts": [], 00:10:22.295 "serial_number": "SPDK00000000000003", 00:10:22.295 "model_number": "SPDK bdev Controller", 00:10:22.295 "max_namespaces": 32, 00:10:22.295 "min_cntlid": 1, 00:10:22.295 "max_cntlid": 65519, 00:10:22.295 "namespaces": [ 00:10:22.295 { 00:10:22.295 "nsid": 1, 00:10:22.295 "bdev_name": "Null3", 00:10:22.295 "name": "Null3", 00:10:22.295 "nguid": "26B0120D07084A10BCCEC614E30636DD", 00:10:22.295 "uuid": "26b0120d-0708-4a10-bcce-c614e30636dd" 00:10:22.295 } 00:10:22.295 ] 00:10:22.295 }, 00:10:22.295 { 00:10:22.295 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:22.295 "subtype": "NVMe", 00:10:22.295 "listen_addresses": [ 00:10:22.295 { 00:10:22.295 
"trtype": "RDMA", 00:10:22.295 "adrfam": "IPv4", 00:10:22.295 "traddr": "192.168.100.8", 00:10:22.295 "trsvcid": "4420" 00:10:22.295 } 00:10:22.295 ], 00:10:22.295 "allow_any_host": true, 00:10:22.295 "hosts": [], 00:10:22.295 "serial_number": "SPDK00000000000004", 00:10:22.295 "model_number": "SPDK bdev Controller", 00:10:22.295 "max_namespaces": 32, 00:10:22.295 "min_cntlid": 1, 00:10:22.295 "max_cntlid": 65519, 00:10:22.295 "namespaces": [ 00:10:22.295 { 00:10:22.295 "nsid": 1, 00:10:22.295 "bdev_name": "Null4", 00:10:22.295 "name": "Null4", 00:10:22.295 "nguid": "8BED602248C14688925041532433A8BF", 00:10:22.295 "uuid": "8bed6022-48c1-4688-9250-41532433a8bf" 00:10:22.295 } 00:10:22.295 ] 00:10:22.295 } 00:10:22.295 ] 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.295 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:22.554 
10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.554 10:38:49 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:22.554 10:38:50 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:22.554 rmmod nvme_rdma 00:10:22.554 rmmod nvme_fabrics 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3701790 ']' 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3701790 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 3701790 ']' 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 3701790 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3701790 00:10:22.554 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:22.555 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:22.555 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3701790' 00:10:22.555 killing process with pid 3701790 00:10:22.555 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 3701790 00:10:22.555 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 3701790 00:10:22.813 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:22.813 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:22.813 00:10:22.813 real 0m8.587s 00:10:22.813 user 0m6.432s 00:10:22.813 sys 0m5.840s 00:10:22.813 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:22.813 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:22.813 ************************************ 00:10:22.813 END TEST nvmf_target_discovery 
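Annotation: nvmftestfini's module cleanup, visible in the `set +e` / `modprobe -v -r` lines above, deliberately tolerates a busy module and retries. A sketch of that pattern (root required; the module names and the {1..20} bound come from the trace, the pause length is an assumption):

set +e
for i in {1..20}; do
    # Unload initiator modules; a busy module just triggers another pass.
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 0.2
done
set -e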
00:10:22.813 ************************************ 00:10:22.813 10:38:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:10:22.813 10:38:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:22.813 10:38:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:22.813 10:38:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:22.813 ************************************ 00:10:22.813 START TEST nvmf_referrals 00:10:22.813 ************************************ 00:10:22.813 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:10:23.072 * Looking for test storage... 00:10:23.073 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:23.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.073 --rc genhtml_branch_coverage=1 00:10:23.073 --rc genhtml_function_coverage=1 00:10:23.073 --rc genhtml_legend=1 00:10:23.073 --rc geninfo_all_blocks=1 00:10:23.073 --rc geninfo_unexecuted_blocks=1 00:10:23.073 00:10:23.073 ' 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:23.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.073 --rc genhtml_branch_coverage=1 00:10:23.073 --rc genhtml_function_coverage=1 00:10:23.073 --rc genhtml_legend=1 00:10:23.073 --rc geninfo_all_blocks=1 00:10:23.073 --rc geninfo_unexecuted_blocks=1 00:10:23.073 00:10:23.073 ' 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:23.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.073 --rc genhtml_branch_coverage=1 00:10:23.073 --rc genhtml_function_coverage=1 00:10:23.073 --rc genhtml_legend=1 00:10:23.073 --rc geninfo_all_blocks=1 00:10:23.073 --rc geninfo_unexecuted_blocks=1 00:10:23.073 00:10:23.073 ' 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:23.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.073 --rc genhtml_branch_coverage=1 00:10:23.073 --rc genhtml_function_coverage=1 00:10:23.073 --rc genhtml_legend=1 00:10:23.073 --rc geninfo_all_blocks=1 00:10:23.073 --rc geninfo_unexecuted_blocks=1 00:10:23.073 00:10:23.073 ' 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
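Annotation: the `lt 1.15 2` exchange above is scripts/common.sh deciding whether the installed lcov predates 2.x, which controls the `--rc lcov_*_coverage=1` option spelling chosen next. The comparison, reconstructed from the traced steps into one standalone function:

lt() {   # exit 0 iff version $1 sorts before version $2
    local -a ver1 ver2
    local v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    # Walk the longer of the two field lists; missing fields count as 0.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "old lcov: keep the branch/function coverage rc flags"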
nvmf/common.sh@7 -- # uname -s 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.073 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:23.074 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:23.074 10:38:50 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:29.643 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:29.643 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:29.643 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:29.644 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:29.644 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # 
[[ rdma == tcp ]] 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:29.644 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:29.904 10:38:57 
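Annotation: the long run from `pci_devs=()` to the two `Found net devices under ...` lines is NIC discovery: build the Intel/Mellanox PCI ID tables, keep the mlx5 functions present on this rig, and map each one to its kernel netdev through sysfs. The mapping step, sketched standalone with the two addresses this log found:

for pci in 0000:d9:00.0 0000:d9:00.1; do      # device addresses from this run
    # Glob the net/ directory under the PCI function, keep only the names.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done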
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:29.904 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:29.904 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:29.904 altname enp217s0f0np0 00:10:29.904 altname ens818f0np0 00:10:29.904 inet 192.168.100.8/24 scope global mlx_0_0 00:10:29.904 valid_lft forever preferred_lft forever 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:29.904 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:29.904 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:29.904 altname enp217s0f1np1 00:10:29.904 altname ens818f1np1 00:10:29.904 inet 192.168.100.9/24 scope global mlx_0_1 00:10:29.904 valid_lft forever preferred_lft forever 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:29.904 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:29.905 10:38:57 
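Annotation: the `ip -o -4 addr show | awk | cut` triplet above is the harness's get_ip_address helper, reconstructed here as a function using the same commands as the trace:

get_ip_address() {
    # 'ip -o' prints one record per line; field 4 is addr/prefix and the
    # cut drops the prefix length.
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig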
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:29.905 192.168.100.9' 
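Annotation: the second pass over get_rdma_if_list, ending above, concatenates one address per RDMA port into RDMA_IP_LIST. Sketched with the get_ip_address function from the previous block; this rig's two interface names stand in for the real interface enumeration:

RDMA_IP_LIST=""
for nic_name in mlx_0_0 mlx_0_1; do
    RDMA_IP_LIST+="$(get_ip_address "$nic_name")"$'\n'
done
RDMA_IP_LIST=${RDMA_IP_LIST%$'\n'}   # drop the trailing newline
echo "$RDMA_IP_LIST"                 # two lines here: 192.168.100.8, 192.168.100.9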
00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:29.905 192.168.100.9' 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:29.905 192.168.100.9' 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3705258 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3705258 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 3705258 ']' 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:29.905 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:30.164 [2024-11-07 10:38:57.586312] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
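Annotation: the first and second target IPs then fall out of that list with the head/tail pair just traced:

NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"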
00:10:30.164 [2024-11-07 10:38:57.586366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.164 [2024-11-07 10:38:57.662067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:30.164 [2024-11-07 10:38:57.701497] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.164 [2024-11-07 10:38:57.701543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.164 [2024-11-07 10:38:57.701552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.164 [2024-11-07 10:38:57.701560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.164 [2024-11-07 10:38:57.701567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:30.164 [2024-11-07 10:38:57.703408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.164 [2024-11-07 10:38:57.703502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.164 [2024-11-07 10:38:57.703576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.164 [2024-11-07 10:38:57.703579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.164 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:30.164 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:10:30.164 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:30.164 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:30.164 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:30.423 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.423 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:30.423 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.423 10:38:57 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:30.423 [2024-11-07 10:38:57.869654] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x99fdf0/0x9a42e0) succeed. 00:10:30.423 [2024-11-07 10:38:57.878730] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9a1480/0x9e5980) succeed. 
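Annotation: nvmfappstart, whose DPDK and reactor notices end above, backgrounds nvmf_tgt with the flags from this run and blocks until the RPC socket answers. A sketch of that launch-and-wait; the polling loop is an assumption standing in for the harness's own waitforlisten helper:

rpc=/path/to/spdk/scripts/rpc.py
/path/to/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll until the target answers on the default RPC socket, bailing out if
# the process dies first.
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done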
00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:30.423 [2024-11-07 10:38:58.010575] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
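Annotation: with the target listening, the referrals test proper begins above: a discovery listener on port 8009, three referrals on the referral port 4430, and a `jq length` check expecting exactly 3. The sequence end to end, with the same hypothetical rpc.py path (the transport was created one step earlier in the trace):

rpc=/path/to/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
"$rpc" nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$rpc" nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
done
count=$("$rpc" nvmf_discovery_get_referrals | jq length)
((count == 3)) || echo "expected 3 referrals, got $count" >&2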
common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:30.423 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:30.682 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # sort 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:10:30.941 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:31.200 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:31.200 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:31.200 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:31.200 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:31.200 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:10:31.200 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:31.200 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:31.200 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:31.200 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.200 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:31.200 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.200 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:31.200 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:31.200 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:31.200 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:31.200 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.200 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:31.200 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:31.459 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.459 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:31.459 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:31.459 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:31.459 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:31.459 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:31.459 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:10:31.459 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:31.459 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:31.459 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:31.459 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:31.459 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:31.459 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:31.459 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:31.459 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:10:31.459 10:38:58 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:31.459 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:31.459 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:31.459 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:31.459 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:31.459 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:10:31.459 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
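The stretch of trace above is the heart of referrals.sh: referrals are added and removed with the nvmf_discovery_* RPCs, and each state is verified twice, once from the target side (nvmf_discovery_get_referrals piped through jq) and once from the host side (nvme discover against the discovery service on 192.168.100.8:8009, filtering out the current discovery subsystem). A condensed sketch of that cycle, using the same addresses, NQNs, and jq filters as this run (rpc_cmd is the test wrapper around SPDK's scripts/rpc.py; the --hostnqn/--hostid options from the trace are omitted for brevity):

  # Add one discovery referral and one subsystem referral, as in the trace.
  rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery
  rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

  # Target-side view of the referral addresses.
  rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  # Host-side view: the referrals show up as discovery log records.
  nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

  # Remove both again; the final get_referrals listing should be empty.
  rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery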
00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:31.719 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.978 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:31.978 rmmod nvme_rdma 00:10:31.978 rmmod nvme_fabrics 00:10:31.978 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:31.978 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:31.978 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:31.978 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3705258 ']' 00:10:31.978 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3705258 00:10:31.978 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 3705258 ']' 00:10:31.978 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 3705258 00:10:31.978 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:10:31.978 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:31.978 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3705258 00:10:31.978 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:31.978 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:31.978 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3705258' 00:10:31.978 killing process with pid 3705258 00:10:31.978 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 3705258 00:10:31.978 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 3705258 00:10:32.236 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:32.236 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:32.236 00:10:32.236 real 0m9.271s 00:10:32.236 user 0m10.721s 00:10:32.236 sys 0m6.097s 00:10:32.236 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:32.236 10:38:59 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:32.236 ************************************ 00:10:32.236 END TEST nvmf_referrals 00:10:32.237 ************************************ 00:10:32.237 10:38:59 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:10:32.237 10:38:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:32.237 10:38:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:32.237 10:38:59 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:32.237 ************************************ 00:10:32.237 START TEST nvmf_connect_disconnect 00:10:32.237 ************************************ 00:10:32.237 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:10:32.237 * Looking for test storage... 00:10:32.237 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:32.237 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:32.237 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:10:32.237 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:32.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.496 --rc genhtml_branch_coverage=1 00:10:32.496 --rc genhtml_function_coverage=1 00:10:32.496 --rc genhtml_legend=1 00:10:32.496 --rc geninfo_all_blocks=1 00:10:32.496 --rc geninfo_unexecuted_blocks=1 00:10:32.496 00:10:32.496 ' 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:32.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.496 --rc genhtml_branch_coverage=1 00:10:32.496 --rc genhtml_function_coverage=1 00:10:32.496 --rc genhtml_legend=1 00:10:32.496 --rc geninfo_all_blocks=1 00:10:32.496 --rc geninfo_unexecuted_blocks=1 00:10:32.496 00:10:32.496 ' 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:32.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.496 --rc genhtml_branch_coverage=1 00:10:32.496 --rc genhtml_function_coverage=1 00:10:32.496 --rc genhtml_legend=1 00:10:32.496 --rc geninfo_all_blocks=1 00:10:32.496 --rc geninfo_unexecuted_blocks=1 00:10:32.496 00:10:32.496 ' 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:32.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.496 --rc genhtml_branch_coverage=1 00:10:32.496 --rc genhtml_function_coverage=1 00:10:32.496 --rc genhtml_legend=1 00:10:32.496 --rc geninfo_all_blocks=1 00:10:32.496 --rc geninfo_unexecuted_blocks=1 00:10:32.496 00:10:32.496 ' 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.496 10:38:59 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:32.496 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:32.496 10:38:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:32.496 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:32.496 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:32.496 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.496 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:32.496 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:32.496 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:32.496 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.496 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.496 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.496 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:32.496 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:32.496 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:32.496 10:39:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:39.065 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.065 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:39.065 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:39.065 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:39.065 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:39.065 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:39.065 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:39.065 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:39.065 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 
00:10:39.066 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:39.066 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:39.066 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 
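Device discovery in nvmf/common.sh is purely sysfs-driven: gather_supported_nvmf_pci_devs matches the known Intel/Mellanox PCI device IDs (here two Mellanox 0x1015 functions, 0000:d9:00.0 and 0000:d9:00.1) and then resolves each matching PCI function to its kernel netdev names, which is how mlx_0_0 and mlx_0_1 are found just above and below. The lookup can be reproduced on its own (a sketch; the PCI address is the one found in this run):

  # For a given PCI function, the kernel exposes its netdev name(s) under
  # /sys/bus/pci/devices/<addr>/net/, the same lookup the trace performs.
  pci=0000:d9:00.0                            # address taken from this run
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")     # strip the sysfs path, keep the names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"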
00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:39.066 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:39.066 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:39.325 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:39.325 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:39.325 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:39.325 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:39.326 10:39:06 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:39.326 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:39.326 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:39.326 altname enp217s0f0np0 00:10:39.326 altname ens818f0np0 00:10:39.326 inet 192.168.100.8/24 scope global mlx_0_0 00:10:39.326 valid_lft forever preferred_lft forever 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 
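allocate_nic_ips resolves each RDMA-capable netdev to its primary IPv4 address with a small ip/awk/cut pipeline, as traced above for mlx_0_0 (192.168.100.8) and, continuing just below, for mlx_0_1 (192.168.100.9). The idiom in isolation, with the same helper name the test uses:

  # Print the primary IPv4 address of an interface, as the trace does:
  # `ip -o -4` emits one-line records, field 4 is "ADDR/PREFIX", cut drops the prefix.
  get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0    # prints 192.168.100.8 on this machine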
00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:39.326 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:39.326 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:39.326 altname enp217s0f1np1 00:10:39.326 altname ens818f1np1 00:10:39.326 inet 192.168.100.9/24 scope global mlx_0_1 00:10:39.326 valid_lft forever preferred_lft forever 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:39.326 10:39:06 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:39.326 192.168.100.9' 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:39.326 192.168.100.9' 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:39.326 192.168.100.9' 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:39.326 10:39:06 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3709557 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3709557 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 3709557 ']' 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.326 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:39.327 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.327 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:39.327 10:39:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:39.327 [2024-11-07 10:39:06.983516] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:10:39.327 [2024-11-07 10:39:06.983563] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.586 [2024-11-07 10:39:07.061489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.586 [2024-11-07 10:39:07.102071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.586 [2024-11-07 10:39:07.102112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.586 [2024-11-07 10:39:07.102121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.586 [2024-11-07 10:39:07.102130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.586 [2024-11-07 10:39:07.102137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
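nvmfappstart above boots the target with four reactors (-m 0xF), the full tracepoint mask (-e 0xFFFF), and shm id 0, then waitforlisten blocks until the RPC server at /var/tmp/spdk.sock answers, which is roughly when the reactor messages just below appear. A minimal stand-in for that pair (the polling loop is an illustration of what waitforlisten waits for, not its actual implementation):

  # Start nvmf_tgt in the background with the same flags as this run...
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # ...and wait until the RPC server accepts commands on the default UNIX socket.
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is up"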
00:10:39.586 [2024-11-07 10:39:07.103932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.586 [2024-11-07 10:39:07.104031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.586 [2024-11-07 10:39:07.104046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.586 [2024-11-07 10:39:07.104053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.152 10:39:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:40.152 10:39:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:10:40.152 10:39:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:40.152 10:39:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:40.411 10:39:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:40.411 10:39:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.411 10:39:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:10:40.411 10:39:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.411 10:39:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:40.411 [2024-11-07 10:39:07.874930] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:10:40.411 [2024-11-07 10:39:07.896775] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1abadf0/0x1abf2e0) succeed. 00:10:40.411 [2024-11-07 10:39:07.906092] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1abc480/0x1b00980) succeed. 
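With the mlx5 IB devices created, the target stack is provisioned entirely over RPC in the next stretch of the trace: the RDMA transport, a 64 MiB malloc bdev, the cnode1 subsystem, its namespace, and an RDMA listener on 192.168.100.8:4420, after which the test loops five nvme connect/disconnect iterations (the "disconnected 1 controller(s)" lines below). Condensed, with the same parameters as this run:

  # RPC provisioning sequence, as traced below (rpc_cmd wraps scripts/rpc.py).
  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512          # 64 MiB bdev, 512 B blocks, named "Malloc0"
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420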
00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:40.411 [2024-11-07 10:39:08.053478] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:40.411 10:39:08 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:44.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.456 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:00.456 10:39:28 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:00.456 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:00.456 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:00.456 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:00.456 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:00.456 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:00.456 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:00.456 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:00.456 rmmod nvme_rdma 00:11:00.456 rmmod nvme_fabrics 00:11:00.456 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:00.456 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:00.456 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:00.456 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3709557 ']' 00:11:00.456 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3709557 00:11:00.456 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3709557 ']' 00:11:00.456 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 3709557 00:11:00.456 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 00:11:00.456 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:00.456 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3709557 00:11:00.715 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:00.715 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:00.715 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3709557' 00:11:00.715 killing process with pid 3709557 00:11:00.715 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 3709557 00:11:00.715 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 3709557 00:11:00.715 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:00.715 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:00.715 00:11:00.715 real 0m28.596s 00:11:00.715 user 1m26.589s 00:11:00.715 sys 0m6.488s 00:11:00.715 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:00.715 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:00.715 
************************************ 00:11:00.715 END TEST nvmf_connect_disconnect 00:11:00.715 ************************************ 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:00.974 ************************************ 00:11:00.974 START TEST nvmf_multitarget 00:11:00.974 ************************************ 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:11:00.974 * Looking for test storage... 00:11:00.974 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:00.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.974 --rc genhtml_branch_coverage=1 00:11:00.974 --rc genhtml_function_coverage=1 00:11:00.974 --rc genhtml_legend=1 00:11:00.974 --rc geninfo_all_blocks=1 00:11:00.974 --rc geninfo_unexecuted_blocks=1 00:11:00.974 00:11:00.974 ' 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:00.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.974 --rc genhtml_branch_coverage=1 00:11:00.974 --rc genhtml_function_coverage=1 00:11:00.974 --rc genhtml_legend=1 00:11:00.974 --rc geninfo_all_blocks=1 00:11:00.974 --rc geninfo_unexecuted_blocks=1 00:11:00.974 00:11:00.974 ' 00:11:00.974 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:00.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.974 --rc genhtml_branch_coverage=1 00:11:00.974 --rc genhtml_function_coverage=1 00:11:00.974 --rc genhtml_legend=1 00:11:00.974 --rc geninfo_all_blocks=1 00:11:00.974 --rc geninfo_unexecuted_blocks=1 00:11:00.974 00:11:00.974 ' 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:00.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.975 --rc genhtml_branch_coverage=1 00:11:00.975 --rc genhtml_function_coverage=1 00:11:00.975 --rc genhtml_legend=1 00:11:00.975 --rc geninfo_all_blocks=1 00:11:00.975 --rc geninfo_unexecuted_blocks=1 00:11:00.975 00:11:00.975 ' 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.975 10:39:28 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.975 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.975 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.234 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.234 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:01.234 10:39:28 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:01.234 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:01.234 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.234 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:01.234 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:01.234 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:01.234 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.234 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.234 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.234 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:01.234 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:01.234 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:01.234 10:39:28 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:07.801 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:07.801 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.801 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:07.802 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:07.802 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:07.802 10:39:35 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:07.802 10:39:35 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:07.802 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:07.802 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:07.802 altname enp217s0f0np0 00:11:07.802 altname ens818f0np0 00:11:07.802 inet 192.168.100.8/24 scope global mlx_0_0 00:11:07.802 valid_lft forever preferred_lft forever 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:07.802 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:07.802 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:07.802 altname enp217s0f1np1 00:11:07.802 altname ens818f1np1 00:11:07.802 inet 192.168.100.9/24 scope global mlx_0_1 00:11:07.802 valid_lft forever preferred_lft forever 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:07.802 10:39:35 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:07.802 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:07.803 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:07.803 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:07.803 192.168.100.9' 00:11:07.803 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:07.803 192.168.100.9' 00:11:07.803 10:39:35 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # head -n 1 00:11:07.803 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:07.803 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:07.803 192.168.100.9' 00:11:07.803 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:11:07.803 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:11:07.803 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:07.803 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:07.803 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:07.803 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:07.803 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:07.803 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:08.062 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:08.062 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:08.062 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.062 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:08.062 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3716440 00:11:08.062 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:08.062 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3716440 00:11:08.062 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 3716440 ']' 00:11:08.062 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.062 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:08.062 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.062 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:08.062 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:08.062 [2024-11-07 10:39:35.553239] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
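nvmfappstart above launches the target with the flags seen in the log (-i 0 shared-memory id, -e 0xFFFF tracepoint mask, -m 0xF for cores 0-3), and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A simplified equivalent, assuming an SPDK build tree; this is a sketch of the behavior, not the harness's exact code:

    # Launch the NVMe-oF target app in the background
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the default RPC socket (/var/tmp/spdk.sock) until it responds;
    # rpc_get_methods is a cheap no-side-effect probe
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The harness records the pid (nvmfpid=3716440 here) so killprocess can tear the app down at the end of the test.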
00:11:08.062 [2024-11-07 10:39:35.553291] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.062 [2024-11-07 10:39:35.630691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.062 [2024-11-07 10:39:35.670373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.062 [2024-11-07 10:39:35.670413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.062 [2024-11-07 10:39:35.670423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.062 [2024-11-07 10:39:35.670431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.062 [2024-11-07 10:39:35.670438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.062 [2024-11-07 10:39:35.672230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.062 [2024-11-07 10:39:35.672325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.062 [2024-11-07 10:39:35.672420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.062 [2024-11-07 10:39:35.672422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.320 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:08.320 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:11:08.320 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:08.320 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:08.320 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:08.320 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.320 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:08.320 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:08.320 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:08.320 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:08.320 10:39:35 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:08.597 "nvmf_tgt_1" 00:11:08.597 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:08.597 "nvmf_tgt_2" 00:11:08.597 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:08.597 
10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:08.903 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:08.903 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:08.903 true 00:11:08.903 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:08.903 true 00:11:08.903 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:08.903 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:09.188 rmmod nvme_rdma 00:11:09.188 rmmod nvme_fabrics 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3716440 ']' 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3716440 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 3716440 ']' 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 3716440 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3716440 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3716440' 00:11:09.188 killing process with pid 3716440 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 3716440 00:11:09.188 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 3716440 00:11:09.448 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:09.448 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:09.448 00:11:09.448 real 0m8.458s 00:11:09.448 user 0m7.523s 00:11:09.448 sys 0m5.742s 00:11:09.448 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:09.448 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:09.448 ************************************ 00:11:09.448 END TEST nvmf_multitarget 00:11:09.448 ************************************ 00:11:09.448 10:39:36 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:11:09.448 10:39:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:09.448 10:39:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:09.448 10:39:36 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:09.448 ************************************ 00:11:09.448 START TEST nvmf_rpc 00:11:09.448 ************************************ 00:11:09.448 10:39:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:11:09.448 * Looking for test storage... 
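The multitarget pass that just finished reduces to a handful of RPCs: count the targets, create two extra ones, recount, delete them, recount. A condensed sketch using the test's multitarget_rpc.py helper against the running target; -s is taken here to be a per-target max-subsystems cap, an assumption inferred from its usage in this run:

    # Condensed sketch of the multitarget flow logged above; assumes the
    # nvmf_tgt from this test is still up and jq is installed.
    mt=test/nvmf/target/multitarget_rpc.py
    [ "$($mt nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
    $mt nvmf_create_target -n nvmf_tgt_1 -s 32        # -s: assumed subsystem cap
    $mt nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($mt nvmf_get_targets | jq length)" -eq 3 ]   # default + two new targets
    $mt nvmf_delete_target -n nvmf_tgt_1
    $mt nvmf_delete_target -n nvmf_tgt_2
    [ "$($mt nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default

The script's own checks are the inverted '[' 1 '!=' 1 ']' / '[' 3 '!=' 3 ']' guards visible in the log, which trip the error trap on a mismatch.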
00:11:09.448 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.448 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:09.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.708 --rc genhtml_branch_coverage=1 00:11:09.708 --rc genhtml_function_coverage=1 00:11:09.708 --rc genhtml_legend=1 00:11:09.708 --rc geninfo_all_blocks=1 00:11:09.708 --rc geninfo_unexecuted_blocks=1 00:11:09.708 00:11:09.708 ' 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:09.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.708 --rc genhtml_branch_coverage=1 00:11:09.708 --rc genhtml_function_coverage=1 00:11:09.708 --rc genhtml_legend=1 00:11:09.708 --rc geninfo_all_blocks=1 00:11:09.708 --rc geninfo_unexecuted_blocks=1 00:11:09.708 00:11:09.708 ' 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:09.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.708 --rc genhtml_branch_coverage=1 00:11:09.708 --rc genhtml_function_coverage=1 00:11:09.708 --rc genhtml_legend=1 00:11:09.708 --rc geninfo_all_blocks=1 00:11:09.708 --rc geninfo_unexecuted_blocks=1 00:11:09.708 00:11:09.708 ' 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:09.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.708 --rc genhtml_branch_coverage=1 00:11:09.708 --rc genhtml_function_coverage=1 00:11:09.708 --rc genhtml_legend=1 00:11:09.708 --rc geninfo_all_blocks=1 00:11:09.708 --rc geninfo_unexecuted_blocks=1 00:11:09.708 00:11:09.708 ' 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.708 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:09.708 10:39:37 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.708 10:39:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.274 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:16.274 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:16.274 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:16.274 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:16.274 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.275 10:39:43 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:16.275 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:16.275 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:16.275 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:16.275 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:16.275 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:16.276 10:39:43 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:16.276 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:16.276 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:16.276 altname enp217s0f0np0 00:11:16.276 altname ens818f0np0 00:11:16.276 inet 192.168.100.8/24 scope global mlx_0_0 00:11:16.276 valid_lft forever preferred_lft forever 00:11:16.276 
10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:16.276 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:16.276 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:16.276 altname enp217s0f1np1 00:11:16.276 altname ens818f1np1 00:11:16.276 inet 192.168.100.9/24 scope global mlx_0_1 00:11:16.276 valid_lft forever preferred_lft forever 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
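The get_ip_address helper traced above pulls each interface's IPv4 address out of `ip -o -4 addr show`; a minimal standalone sketch of that same pipeline (the mlx_0_0 interface name and the 192.168.100.8 result are taken from this run and are not guaranteed elsewhere):

    # Sketch of the extraction traced at nvmf/common.sh@116-117.
    get_ip_address() {
        local interface=$1
        # -o prints one record per line; field 4 is "ADDR/PREFIX", so strip the prefix.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    ip=$(get_ip_address mlx_0_0)              # 192.168.100.8 on this host
    [[ -z $ip ]] && echo "mlx_0_0 has no IPv4 address" >&2
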
00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:11:16.276 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:16.277 192.168.100.9' 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:16.277 192.168.100.9' 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:16.277 192.168.100.9' 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
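Selecting the first and second target IPs out of RDMA_IP_LIST, as traced at nvmf/common.sh@485-486 above, is plain head/tail slicing over a one-address-per-line list; a self-contained sketch using the two addresses gathered in this run:

    # One address per line, as produced by get_available_rdma_ips above.
    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    # tail -n +2 skips the first line; head -n 1 then takes the second address.
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP / $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 / 192.168.100.9
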
00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3720118 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3720118 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 3720118 ']' 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:16.277 10:39:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.536 [2024-11-07 10:39:43.957387] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:11:16.536 [2024-11-07 10:39:43.957434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.536 [2024-11-07 10:39:44.032266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.536 [2024-11-07 10:39:44.070352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.536 [2024-11-07 10:39:44.070396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.536 [2024-11-07 10:39:44.070406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.536 [2024-11-07 10:39:44.070414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.536 [2024-11-07 10:39:44.070421] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
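waitforlisten (invoked above with the nvmfpid just captured) blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock; its body is not traced in this excerpt, so the loop below is only a hedged approximation of the idea, polling the RPC socket with SPDK's scripts/rpc.py rather than reproducing the helper's actual logic:

    # Hypothetical stand-in for waitforlisten; the real helper in
    # autotest_common.sh may differ in retry count, timeout, and error handling.
    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0
        while (( i++ < 100 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            # rpc_get_methods succeeds once the app is listening on the socket.
            if /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
                   -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc 3720118   # pid from the nvmfpid= assignment above
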
00:11:16.536 [2024-11-07 10:39:44.072049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.536 [2024-11-07 10:39:44.072143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.536 [2024-11-07 10:39:44.072235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.536 [2024-11-07 10:39:44.072237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.536 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:16.536 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:16.536 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:16.536 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:16.536 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.796 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.796 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:16.796 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.796 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.796 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.796 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:16.796 "tick_rate": 2500000000, 00:11:16.796 "poll_groups": [ 00:11:16.796 { 00:11:16.796 "name": "nvmf_tgt_poll_group_000", 00:11:16.796 "admin_qpairs": 0, 00:11:16.796 "io_qpairs": 0, 00:11:16.796 "current_admin_qpairs": 0, 00:11:16.796 "current_io_qpairs": 0, 00:11:16.796 "pending_bdev_io": 0, 00:11:16.796 "completed_nvme_io": 0, 00:11:16.796 "transports": [] 00:11:16.796 }, 00:11:16.796 { 00:11:16.796 "name": "nvmf_tgt_poll_group_001", 00:11:16.796 "admin_qpairs": 0, 00:11:16.796 "io_qpairs": 0, 00:11:16.796 "current_admin_qpairs": 0, 00:11:16.796 "current_io_qpairs": 0, 00:11:16.796 "pending_bdev_io": 0, 00:11:16.796 "completed_nvme_io": 0, 00:11:16.796 "transports": [] 00:11:16.796 }, 00:11:16.796 { 00:11:16.796 "name": "nvmf_tgt_poll_group_002", 00:11:16.796 "admin_qpairs": 0, 00:11:16.796 "io_qpairs": 0, 00:11:16.796 "current_admin_qpairs": 0, 00:11:16.796 "current_io_qpairs": 0, 00:11:16.796 "pending_bdev_io": 0, 00:11:16.796 "completed_nvme_io": 0, 00:11:16.796 "transports": [] 00:11:16.796 }, 00:11:16.796 { 00:11:16.796 "name": "nvmf_tgt_poll_group_003", 00:11:16.796 "admin_qpairs": 0, 00:11:16.796 "io_qpairs": 0, 00:11:16.796 "current_admin_qpairs": 0, 00:11:16.796 "current_io_qpairs": 0, 00:11:16.796 "pending_bdev_io": 0, 00:11:16.796 "completed_nvme_io": 0, 00:11:16.796 "transports": [] 00:11:16.796 } 00:11:16.796 ] 00:11:16.796 }' 00:11:16.796 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:16.796 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:16.796 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:16.796 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:16.796 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:11:16.796 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:16.796 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:16.796 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:16.796 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.796 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.796 [2024-11-07 10:39:44.356137] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x68de50/0x692340) succeed. 00:11:16.796 [2024-11-07 10:39:44.366125] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x68f4e0/0x6d39e0) succeed. 00:11:17.055 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.055 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:17.055 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.055 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.055 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.055 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:17.055 "tick_rate": 2500000000, 00:11:17.055 "poll_groups": [ 00:11:17.055 { 00:11:17.055 "name": "nvmf_tgt_poll_group_000", 00:11:17.055 "admin_qpairs": 0, 00:11:17.055 "io_qpairs": 0, 00:11:17.055 "current_admin_qpairs": 0, 00:11:17.055 "current_io_qpairs": 0, 00:11:17.055 "pending_bdev_io": 0, 00:11:17.055 "completed_nvme_io": 0, 00:11:17.055 "transports": [ 00:11:17.055 { 00:11:17.055 "trtype": "RDMA", 00:11:17.055 "pending_data_buffer": 0, 00:11:17.055 "devices": [ 00:11:17.055 { 00:11:17.055 "name": "mlx5_0", 00:11:17.055 "polls": 15392, 00:11:17.055 "idle_polls": 15392, 00:11:17.055 "completions": 0, 00:11:17.055 "requests": 0, 00:11:17.055 "request_latency": 0, 00:11:17.055 "pending_free_request": 0, 00:11:17.055 "pending_rdma_read": 0, 00:11:17.055 "pending_rdma_write": 0, 00:11:17.055 "pending_rdma_send": 0, 00:11:17.055 "total_send_wrs": 0, 00:11:17.055 "send_doorbell_updates": 0, 00:11:17.055 "total_recv_wrs": 4096, 00:11:17.055 "recv_doorbell_updates": 1 00:11:17.055 }, 00:11:17.055 { 00:11:17.055 "name": "mlx5_1", 00:11:17.055 "polls": 15392, 00:11:17.055 "idle_polls": 15392, 00:11:17.055 "completions": 0, 00:11:17.055 "requests": 0, 00:11:17.055 "request_latency": 0, 00:11:17.055 "pending_free_request": 0, 00:11:17.055 "pending_rdma_read": 0, 00:11:17.055 "pending_rdma_write": 0, 00:11:17.055 "pending_rdma_send": 0, 00:11:17.055 "total_send_wrs": 0, 00:11:17.055 "send_doorbell_updates": 0, 00:11:17.055 "total_recv_wrs": 4096, 00:11:17.055 "recv_doorbell_updates": 1 00:11:17.055 } 00:11:17.055 ] 00:11:17.055 } 00:11:17.055 ] 00:11:17.055 }, 00:11:17.055 { 00:11:17.055 "name": "nvmf_tgt_poll_group_001", 00:11:17.055 "admin_qpairs": 0, 00:11:17.055 "io_qpairs": 0, 00:11:17.055 "current_admin_qpairs": 0, 00:11:17.055 "current_io_qpairs": 0, 00:11:17.055 "pending_bdev_io": 0, 00:11:17.055 "completed_nvme_io": 0, 00:11:17.055 "transports": [ 00:11:17.055 { 00:11:17.055 "trtype": "RDMA", 00:11:17.055 "pending_data_buffer": 0, 00:11:17.055 "devices": [ 00:11:17.055 { 00:11:17.055 "name": "mlx5_0", 
00:11:17.055 "polls": 9740, 00:11:17.055 "idle_polls": 9740, 00:11:17.055 "completions": 0, 00:11:17.055 "requests": 0, 00:11:17.055 "request_latency": 0, 00:11:17.055 "pending_free_request": 0, 00:11:17.055 "pending_rdma_read": 0, 00:11:17.055 "pending_rdma_write": 0, 00:11:17.055 "pending_rdma_send": 0, 00:11:17.055 "total_send_wrs": 0, 00:11:17.055 "send_doorbell_updates": 0, 00:11:17.055 "total_recv_wrs": 4096, 00:11:17.055 "recv_doorbell_updates": 1 00:11:17.055 }, 00:11:17.055 { 00:11:17.055 "name": "mlx5_1", 00:11:17.055 "polls": 9740, 00:11:17.055 "idle_polls": 9740, 00:11:17.055 "completions": 0, 00:11:17.055 "requests": 0, 00:11:17.055 "request_latency": 0, 00:11:17.055 "pending_free_request": 0, 00:11:17.055 "pending_rdma_read": 0, 00:11:17.055 "pending_rdma_write": 0, 00:11:17.055 "pending_rdma_send": 0, 00:11:17.055 "total_send_wrs": 0, 00:11:17.055 "send_doorbell_updates": 0, 00:11:17.055 "total_recv_wrs": 4096, 00:11:17.055 "recv_doorbell_updates": 1 00:11:17.055 } 00:11:17.055 ] 00:11:17.055 } 00:11:17.055 ] 00:11:17.055 }, 00:11:17.055 { 00:11:17.055 "name": "nvmf_tgt_poll_group_002", 00:11:17.055 "admin_qpairs": 0, 00:11:17.055 "io_qpairs": 0, 00:11:17.055 "current_admin_qpairs": 0, 00:11:17.055 "current_io_qpairs": 0, 00:11:17.055 "pending_bdev_io": 0, 00:11:17.055 "completed_nvme_io": 0, 00:11:17.055 "transports": [ 00:11:17.055 { 00:11:17.055 "trtype": "RDMA", 00:11:17.055 "pending_data_buffer": 0, 00:11:17.055 "devices": [ 00:11:17.055 { 00:11:17.055 "name": "mlx5_0", 00:11:17.055 "polls": 5508, 00:11:17.055 "idle_polls": 5508, 00:11:17.055 "completions": 0, 00:11:17.055 "requests": 0, 00:11:17.055 "request_latency": 0, 00:11:17.055 "pending_free_request": 0, 00:11:17.055 "pending_rdma_read": 0, 00:11:17.055 "pending_rdma_write": 0, 00:11:17.055 "pending_rdma_send": 0, 00:11:17.055 "total_send_wrs": 0, 00:11:17.055 "send_doorbell_updates": 0, 00:11:17.055 "total_recv_wrs": 4096, 00:11:17.055 "recv_doorbell_updates": 1 00:11:17.055 }, 00:11:17.055 { 00:11:17.055 "name": "mlx5_1", 00:11:17.055 "polls": 5508, 00:11:17.055 "idle_polls": 5508, 00:11:17.055 "completions": 0, 00:11:17.055 "requests": 0, 00:11:17.055 "request_latency": 0, 00:11:17.055 "pending_free_request": 0, 00:11:17.055 "pending_rdma_read": 0, 00:11:17.055 "pending_rdma_write": 0, 00:11:17.055 "pending_rdma_send": 0, 00:11:17.055 "total_send_wrs": 0, 00:11:17.055 "send_doorbell_updates": 0, 00:11:17.055 "total_recv_wrs": 4096, 00:11:17.055 "recv_doorbell_updates": 1 00:11:17.055 } 00:11:17.055 ] 00:11:17.055 } 00:11:17.055 ] 00:11:17.055 }, 00:11:17.055 { 00:11:17.055 "name": "nvmf_tgt_poll_group_003", 00:11:17.055 "admin_qpairs": 0, 00:11:17.055 "io_qpairs": 0, 00:11:17.055 "current_admin_qpairs": 0, 00:11:17.055 "current_io_qpairs": 0, 00:11:17.055 "pending_bdev_io": 0, 00:11:17.055 "completed_nvme_io": 0, 00:11:17.055 "transports": [ 00:11:17.055 { 00:11:17.055 "trtype": "RDMA", 00:11:17.055 "pending_data_buffer": 0, 00:11:17.055 "devices": [ 00:11:17.055 { 00:11:17.056 "name": "mlx5_0", 00:11:17.056 "polls": 897, 00:11:17.056 "idle_polls": 897, 00:11:17.056 "completions": 0, 00:11:17.056 "requests": 0, 00:11:17.056 "request_latency": 0, 00:11:17.056 "pending_free_request": 0, 00:11:17.056 "pending_rdma_read": 0, 00:11:17.056 "pending_rdma_write": 0, 00:11:17.056 "pending_rdma_send": 0, 00:11:17.056 "total_send_wrs": 0, 00:11:17.056 "send_doorbell_updates": 0, 00:11:17.056 "total_recv_wrs": 4096, 00:11:17.056 "recv_doorbell_updates": 1 00:11:17.056 }, 00:11:17.056 { 00:11:17.056 "name": "mlx5_1", 
00:11:17.056 "polls": 897, 00:11:17.056 "idle_polls": 897, 00:11:17.056 "completions": 0, 00:11:17.056 "requests": 0, 00:11:17.056 "request_latency": 0, 00:11:17.056 "pending_free_request": 0, 00:11:17.056 "pending_rdma_read": 0, 00:11:17.056 "pending_rdma_write": 0, 00:11:17.056 "pending_rdma_send": 0, 00:11:17.056 "total_send_wrs": 0, 00:11:17.056 "send_doorbell_updates": 0, 00:11:17.056 "total_recv_wrs": 4096, 00:11:17.056 "recv_doorbell_updates": 1 00:11:17.056 } 00:11:17.056 ] 00:11:17.056 } 00:11:17.056 ] 00:11:17.056 } 00:11:17.056 ] 00:11:17.056 }' 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:17.056 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:17.315 10:39:44 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.315 Malloc1 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.315 [2024-11-07 10:39:44.825564] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:17.315 10:39:44 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:11:17.315 [2024-11-07 10:39:44.871827] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:11:17.315 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:17.315 could not add new controller: failed to write to nvme-fabrics device 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.315 10:39:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:18.250 10:39:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:18.250 10:39:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:18.250 10:39:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.250 10:39:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:18.250 10:39:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:20.786 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:20.786 10:39:47 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:20.786 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:20.786 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:20.786 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.786 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:20.786 10:39:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:21.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:21.353 10:39:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:21.353 [2024-11-07 10:39:48.963501] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:11:21.353 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:21.353 could not add new controller: failed to write to nvme-fabrics device 00:11:21.353 10:39:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:21.353 10:39:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:21.353 10:39:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:21.353 10:39:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:21.353 10:39:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:21.353 10:39:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.353 10:39:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.353 10:39:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.353 10:39:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:22.728 10:39:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:22.728 10:39:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:22.728 10:39:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:22.728 10:39:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:22.728 10:39:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:24.632 10:39:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:24.632 10:39:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:24.632 10:39:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:24.632 10:39:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:24.632 10:39:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:24.632 10:39:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:24.632 10:39:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.567 10:39:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:25.567 10:39:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:25.567 10:39:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:25.567 10:39:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:25.567 [2024-11-07 10:39:53.064577] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.567 10:39:53 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.567 10:39:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:26.502 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:26.502 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:26.502 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:26.502 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:26.502 10:39:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:28.406 10:39:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:28.406 10:39:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:28.406 10:39:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:28.665 10:39:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:28.665 10:39:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:28.665 10:39:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:28.665 10:39:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.601 [2024-11-07 10:39:57.102625] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.601 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.602 10:39:57 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:30.538 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:30.538 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:30.538 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:30.538 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:30.538 10:39:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:32.442 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:32.442 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:32.442 
10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:32.700 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:32.700 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.700 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:32.700 10:40:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:33.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.636 [2024-11-07 10:40:01.161228] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.636 10:40:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:34.572 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:34.572 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:34.572 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.572 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:34.572 10:40:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:37.111 10:40:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:37.111 10:40:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:37.111 10:40:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:37.111 10:40:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:37.111 10:40:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:37.111 10:40:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:37.111 10:40:04 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:37.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.679 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:37.679 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:37.679 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:37.679 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.679 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:37.679 10:40:05 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.679 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.680 [2024-11-07 10:40:05.205717] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.680 10:40:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:38.616 10:40:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:38.616 10:40:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:38.616 10:40:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:38.616 10:40:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:38.616 10:40:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:41.148 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:41.148 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:41.148 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:41.148 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:41.148 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:41.148 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:41.148 10:40:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:41.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:41.716 10:40:09 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.716 [2024-11-07 10:40:09.256064] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.716 10:40:09 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:42.651 10:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:42.651 10:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:42.651 10:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:42.651 10:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:42.651 10:40:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:45.181 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:45.182 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:45.182 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:45.182 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:45.182 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == 
nvme_device_counter )) 00:11:45.182 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:45.182 10:40:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:45.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.749 [2024-11-07 10:40:13.307918] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.749 [2024-11-07 10:40:13.356372] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.749 10:40:13 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.749 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.750 [2024-11-07 10:40:13.404505] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.750 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.009 [2024-11-07 10:40:13.452672] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
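The loop traced above (target/rpc.sh@99-107) is the bare create/teardown variant of the earlier @81-94 loop: each pass adds and removes a namespace but never connects a host. A minimal sketch of what the trace is executing, assuming `rpc_cmd` is SPDK's usual test-harness RPC wrapper (e.g. scripts/rpc.py against the running target) and that the Malloc1 bdev already exists:

  loops=5
  for i in $(seq 1 $loops); do
      # Create the subsystem with the serial the host-side checks grep for.
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      # Expose it over RDMA on the address/port seen in the listen notices above.
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
      # Attach the Malloc1 bdev; with no -n flag the target picks the nsid (1 here).
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
      rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      # Tear down in reverse: drop the namespace, then the subsystem.
      rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done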
00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.009 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.010 [2024-11-07 10:40:13.500851] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.010 10:40:13 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:46.010 "tick_rate": 2500000000, 00:11:46.010 "poll_groups": [ 00:11:46.010 { 00:11:46.010 "name": "nvmf_tgt_poll_group_000", 00:11:46.010 "admin_qpairs": 2, 00:11:46.010 "io_qpairs": 27, 00:11:46.010 "current_admin_qpairs": 0, 00:11:46.010 "current_io_qpairs": 0, 00:11:46.010 "pending_bdev_io": 0, 00:11:46.010 "completed_nvme_io": 78, 00:11:46.010 "transports": [ 00:11:46.010 { 00:11:46.010 "trtype": "RDMA", 00:11:46.010 "pending_data_buffer": 0, 00:11:46.010 "devices": [ 00:11:46.010 { 00:11:46.010 "name": "mlx5_0", 00:11:46.010 "polls": 3642276, 00:11:46.010 "idle_polls": 3642030, 00:11:46.010 "completions": 265, 00:11:46.010 "requests": 132, 00:11:46.010 "request_latency": 22320594, 00:11:46.010 "pending_free_request": 0, 00:11:46.010 "pending_rdma_read": 0, 00:11:46.010 "pending_rdma_write": 0, 00:11:46.010 "pending_rdma_send": 0, 00:11:46.010 "total_send_wrs": 209, 00:11:46.010 "send_doorbell_updates": 122, 00:11:46.010 "total_recv_wrs": 4228, 00:11:46.010 "recv_doorbell_updates": 122 00:11:46.010 }, 00:11:46.010 { 00:11:46.010 "name": "mlx5_1", 00:11:46.010 "polls": 3642276, 00:11:46.010 "idle_polls": 3642276, 00:11:46.010 "completions": 0, 00:11:46.010 "requests": 0, 00:11:46.010 "request_latency": 0, 00:11:46.010 "pending_free_request": 0, 00:11:46.010 "pending_rdma_read": 0, 00:11:46.010 "pending_rdma_write": 0, 00:11:46.010 "pending_rdma_send": 0, 00:11:46.010 "total_send_wrs": 0, 00:11:46.010 "send_doorbell_updates": 0, 00:11:46.010 "total_recv_wrs": 4096, 00:11:46.010 "recv_doorbell_updates": 1 00:11:46.010 } 00:11:46.010 ] 00:11:46.010 } 00:11:46.010 ] 00:11:46.010 }, 00:11:46.010 { 00:11:46.010 "name": "nvmf_tgt_poll_group_001", 00:11:46.010 "admin_qpairs": 2, 00:11:46.010 "io_qpairs": 26, 00:11:46.010 "current_admin_qpairs": 0, 00:11:46.010 "current_io_qpairs": 0, 00:11:46.010 "pending_bdev_io": 0, 00:11:46.010 "completed_nvme_io": 127, 00:11:46.010 "transports": [ 00:11:46.010 { 00:11:46.010 "trtype": "RDMA", 00:11:46.010 "pending_data_buffer": 0, 00:11:46.010 "devices": [ 00:11:46.010 { 00:11:46.010 "name": "mlx5_0", 00:11:46.010 "polls": 3523690, 00:11:46.010 "idle_polls": 3523372, 00:11:46.010 "completions": 360, 00:11:46.010 "requests": 180, 00:11:46.010 "request_latency": 36180060, 00:11:46.010 "pending_free_request": 0, 00:11:46.010 "pending_rdma_read": 0, 00:11:46.010 "pending_rdma_write": 0, 00:11:46.010 "pending_rdma_send": 0, 00:11:46.010 "total_send_wrs": 306, 00:11:46.010 "send_doorbell_updates": 156, 00:11:46.010 "total_recv_wrs": 4276, 00:11:46.010 "recv_doorbell_updates": 157 00:11:46.010 }, 00:11:46.010 { 00:11:46.010 "name": "mlx5_1", 00:11:46.010 "polls": 3523690, 00:11:46.010 "idle_polls": 3523690, 00:11:46.010 "completions": 0, 00:11:46.010 "requests": 0, 00:11:46.010 "request_latency": 0, 00:11:46.010 "pending_free_request": 0, 00:11:46.010 
"pending_rdma_read": 0, 00:11:46.010 "pending_rdma_write": 0, 00:11:46.010 "pending_rdma_send": 0, 00:11:46.010 "total_send_wrs": 0, 00:11:46.010 "send_doorbell_updates": 0, 00:11:46.010 "total_recv_wrs": 4096, 00:11:46.010 "recv_doorbell_updates": 1 00:11:46.010 } 00:11:46.010 ] 00:11:46.010 } 00:11:46.010 ] 00:11:46.010 }, 00:11:46.010 { 00:11:46.010 "name": "nvmf_tgt_poll_group_002", 00:11:46.010 "admin_qpairs": 1, 00:11:46.010 "io_qpairs": 26, 00:11:46.010 "current_admin_qpairs": 0, 00:11:46.010 "current_io_qpairs": 0, 00:11:46.010 "pending_bdev_io": 0, 00:11:46.010 "completed_nvme_io": 124, 00:11:46.010 "transports": [ 00:11:46.010 { 00:11:46.010 "trtype": "RDMA", 00:11:46.010 "pending_data_buffer": 0, 00:11:46.010 "devices": [ 00:11:46.010 { 00:11:46.010 "name": "mlx5_0", 00:11:46.010 "polls": 3709642, 00:11:46.010 "idle_polls": 3709380, 00:11:46.010 "completions": 303, 00:11:46.010 "requests": 151, 00:11:46.010 "request_latency": 33920314, 00:11:46.010 "pending_free_request": 0, 00:11:46.010 "pending_rdma_read": 0, 00:11:46.010 "pending_rdma_write": 0, 00:11:46.010 "pending_rdma_send": 0, 00:11:46.010 "total_send_wrs": 262, 00:11:46.010 "send_doorbell_updates": 128, 00:11:46.010 "total_recv_wrs": 4247, 00:11:46.010 "recv_doorbell_updates": 128 00:11:46.010 }, 00:11:46.010 { 00:11:46.010 "name": "mlx5_1", 00:11:46.010 "polls": 3709642, 00:11:46.010 "idle_polls": 3709642, 00:11:46.010 "completions": 0, 00:11:46.010 "requests": 0, 00:11:46.010 "request_latency": 0, 00:11:46.010 "pending_free_request": 0, 00:11:46.010 "pending_rdma_read": 0, 00:11:46.010 "pending_rdma_write": 0, 00:11:46.010 "pending_rdma_send": 0, 00:11:46.010 "total_send_wrs": 0, 00:11:46.010 "send_doorbell_updates": 0, 00:11:46.010 "total_recv_wrs": 4096, 00:11:46.010 "recv_doorbell_updates": 1 00:11:46.010 } 00:11:46.010 ] 00:11:46.010 } 00:11:46.010 ] 00:11:46.010 }, 00:11:46.010 { 00:11:46.010 "name": "nvmf_tgt_poll_group_003", 00:11:46.010 "admin_qpairs": 2, 00:11:46.010 "io_qpairs": 26, 00:11:46.010 "current_admin_qpairs": 0, 00:11:46.010 "current_io_qpairs": 0, 00:11:46.010 "pending_bdev_io": 0, 00:11:46.010 "completed_nvme_io": 126, 00:11:46.010 "transports": [ 00:11:46.010 { 00:11:46.010 "trtype": "RDMA", 00:11:46.010 "pending_data_buffer": 0, 00:11:46.010 "devices": [ 00:11:46.010 { 00:11:46.010 "name": "mlx5_0", 00:11:46.010 "polls": 2861840, 00:11:46.010 "idle_polls": 2861523, 00:11:46.010 "completions": 358, 00:11:46.010 "requests": 179, 00:11:46.010 "request_latency": 37781946, 00:11:46.010 "pending_free_request": 0, 00:11:46.010 "pending_rdma_read": 0, 00:11:46.010 "pending_rdma_write": 0, 00:11:46.010 "pending_rdma_send": 0, 00:11:46.010 "total_send_wrs": 304, 00:11:46.010 "send_doorbell_updates": 154, 00:11:46.010 "total_recv_wrs": 4275, 00:11:46.010 "recv_doorbell_updates": 155 00:11:46.010 }, 00:11:46.010 { 00:11:46.010 "name": "mlx5_1", 00:11:46.010 "polls": 2861840, 00:11:46.010 "idle_polls": 2861840, 00:11:46.010 "completions": 0, 00:11:46.010 "requests": 0, 00:11:46.010 "request_latency": 0, 00:11:46.010 "pending_free_request": 0, 00:11:46.010 "pending_rdma_read": 0, 00:11:46.010 "pending_rdma_write": 0, 00:11:46.010 "pending_rdma_send": 0, 00:11:46.010 "total_send_wrs": 0, 00:11:46.010 "send_doorbell_updates": 0, 00:11:46.010 "total_recv_wrs": 4096, 00:11:46.010 "recv_doorbell_updates": 1 00:11:46.010 } 00:11:46.010 ] 00:11:46.010 } 00:11:46.010 ] 00:11:46.010 } 00:11:46.010 ] 00:11:46.010 }' 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:11:46.010 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:46.011 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:46.011 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:46.011 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:46.011 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:46.011 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:46.011 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:46.011 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1286 > 0 )) 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 130202914 > 0 )) 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:46.270 rmmod nvme_rdma 00:11:46.270 rmmod nvme_fabrics 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:46.270 
10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3720118 ']' 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3720118 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 3720118 ']' 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 3720118 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3720118 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3720118' 00:11:46.270 killing process with pid 3720118 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 3720118 00:11:46.270 10:40:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 3720118 00:11:46.529 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:46.529 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:46.529 00:11:46.529 real 0m37.210s 00:11:46.529 user 2m2.091s 00:11:46.529 sys 0m6.827s 00:11:46.529 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:46.529 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.529 ************************************ 00:11:46.529 END TEST nvmf_rpc 00:11:46.529 ************************************ 00:11:46.529 10:40:14 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:11:46.529 10:40:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:46.529 10:40:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:46.529 10:40:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:46.529 ************************************ 00:11:46.529 START TEST nvmf_invalid 00:11:46.529 ************************************ 00:11:46.529 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:11:46.788 * Looking for test storage... 
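Before tearing down, the nvmf_rpc run above validated the target's accounting by summing fields of the nvmf_get_stats JSON captured at target/rpc.sh@110. The jsum helper visible in the trace (@19-20) is just a jq filter piped through an awk accumulator; a sketch, assuming it reads the captured $stats variable rather than re-querying the target:

  jsum() {
      local filter=$1
      # Sum one numeric field across every poll group / transport device in the dump.
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }

  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))                            # 7 in this run
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))                               # 105
  (( $(jsum '.poll_groups[].transports[].devices[].completions') > 0 ))      # 1286
  (( $(jsum '.poll_groups[].transports[].devices[].request_latency') > 0 ))  # 130202914

The last two checks run only under '[' rdma == rdma ']' (@115), since completions and request_latency are per-device RDMA counters.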
00:11:46.788 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.788 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:46.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.789 --rc genhtml_branch_coverage=1 00:11:46.789 --rc genhtml_function_coverage=1 00:11:46.789 --rc genhtml_legend=1 00:11:46.789 --rc geninfo_all_blocks=1 00:11:46.789 --rc geninfo_unexecuted_blocks=1 00:11:46.789 00:11:46.789 ' 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:46.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.789 --rc genhtml_branch_coverage=1 00:11:46.789 --rc genhtml_function_coverage=1 00:11:46.789 --rc genhtml_legend=1 00:11:46.789 --rc geninfo_all_blocks=1 00:11:46.789 --rc geninfo_unexecuted_blocks=1 00:11:46.789 00:11:46.789 ' 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:46.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.789 --rc genhtml_branch_coverage=1 00:11:46.789 --rc genhtml_function_coverage=1 00:11:46.789 --rc genhtml_legend=1 00:11:46.789 --rc geninfo_all_blocks=1 00:11:46.789 --rc geninfo_unexecuted_blocks=1 00:11:46.789 00:11:46.789 ' 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:46.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.789 --rc genhtml_branch_coverage=1 00:11:46.789 --rc genhtml_function_coverage=1 00:11:46.789 --rc genhtml_legend=1 00:11:46.789 --rc geninfo_all_blocks=1 00:11:46.789 --rc geninfo_unexecuted_blocks=1 00:11:46.789 00:11:46.789 ' 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:46.789 
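The lcov probe above runs scripts/common.sh's lt/cmp_versions, whose whole loop is visible in the trace. A compact reconstruction (the zero fallback for non-numeric fields is an assumption; everything else follows the traced commands):

lt() { cmp_versions "$1" '<' "$2"; }
decimal() {
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0   # assumption: non-numeric -> 0
}
cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"                # traced: split on '.', '-', ':'
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}         # traced: ver1_l=2 ver2_l=1
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        a=$(decimal "${ver1[v]:-0}") b=$(decimal "${ver2[v]:-0}")
        ((a > b)) && { [[ $op == '>' || $op == '>=' ]]; return; }
        ((a < b)) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
}
# lt 1.15 2 succeeds, so the lcov 1.x flavour of LCOV_OPTS is exported above.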
10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.789 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
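One line in the trace above is an actual (harmless) script error: build_nvmf_app_args hands an empty string to test's numeric -eq, and bash prints "[: : integer expression expected". The failure mode in isolation, with a hypothetical variable name:

flag=''                            # e.g. an unset feature toggle
[ "$flag" -eq 1 ] && echo on       # bash: [: : integer expression expected
[ "${flag:-0}" -eq 1 ] && echo on  # defaulting the empty value keeps it quiet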
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:46.789 10:40:14 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.456 10:40:21 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:53.456 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:53.456 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:53.457 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:53.457 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:53.457 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:11:53.457 10:40:21 
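Device discovery above is mostly sysfs globbing: each mlx5 PCI function's netdev names sit under /sys/bus/pci/devices/<bdf>/net/. A condensed sketch of the traced loop (the unknown/unbound driver checks and the device-id special cases for 0x1017/0x1019 are elided):

net_devs=()
for pci in 0000:d9:00.0 0000:d9:00.1; do              # the two traced mlx5 functions
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep just the ifnames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done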
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:53.457 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:53.717 10:40:21 
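get_rdma_if_list, whose body the trace then walks twice, just intersects the discovered netdevs with what rxe_cfg (scripts/rxe_cfg_small.sh in this tree) reports. Sketch from the trace:

get_rdma_if_list() {
    local net_dev rxe_net_dev rxe_net_devs
    mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)   # traced: two candidates
    # net_devs=(...) is populated by the PCI walk sketched earlier
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"    # mlx_0_0, then mlx_0_1
                continue 2         # traced: continue 2 -- on to the next net_dev
            fi
        done
    done
}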
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:53.717 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:53.717 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:53.717 altname enp217s0f0np0 00:11:53.717 altname ens818f0np0 00:11:53.717 inet 192.168.100.8/24 scope global mlx_0_0 00:11:53.717 valid_lft forever preferred_lft forever 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:53.717 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:53.717 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:53.717 altname enp217s0f1np1 00:11:53.717 altname ens818f1np1 00:11:53.717 inet 192.168.100.9/24 scope global mlx_0_1 00:11:53.717 valid_lft forever preferred_lft forever 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:53.717 10:40:21 
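The per-interface address lookup that produced 192.168.100.8 and 192.168.100.9 above is a single pipeline, exactly as traced:

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
# get_ip_address mlx_0_0   -> 192.168.100.8
# get_ip_address mlx_0_1   -> 192.168.100.9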
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.717 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:53.717 192.168.100.9' 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:53.718 192.168.100.9' 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:53.718 10:40:21 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:53.718 192.168.100.9' 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3728602 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3728602 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 3728602 ']' 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:53.718 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:53.977 [2024-11-07 10:40:21.404165] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:11:53.977 [2024-11-07 10:40:21.404212] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.977 [2024-11-07 10:40:21.482037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.977 [2024-11-07 10:40:21.520284] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.977 [2024-11-07 10:40:21.520326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
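The RDMA_IP_LIST juggling above reduces to head/tail over a newline-separated list, after which nvmfappstart launches nvmf_tgt (pid 3728602) and waits for its /var/tmp/spdk.sock RPC socket. The IP selection, as traced:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)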
00:11:53.977 [2024-11-07 10:40:21.520336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.977 [2024-11-07 10:40:21.520344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.977 [2024-11-07 10:40:21.520367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.977 [2024-11-07 10:40:21.521967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.977 [2024-11-07 10:40:21.522063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.977 [2024-11-07 10:40:21.522155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.977 [2024-11-07 10:40:21.522157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.977 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:53.977 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:11:53.977 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:53.977 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:53.977 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:54.236 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.236 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:54.236 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28040 00:11:54.236 [2024-11-07 10:40:21.835272] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:54.236 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:54.236 { 00:11:54.236 "nqn": "nqn.2016-06.io.spdk:cnode28040", 00:11:54.236 "tgt_name": "foobar", 00:11:54.236 "method": "nvmf_create_subsystem", 00:11:54.236 "req_id": 1 00:11:54.236 } 00:11:54.236 Got JSON-RPC error response 00:11:54.236 response: 00:11:54.236 { 00:11:54.236 "code": -32603, 00:11:54.236 "message": "Unable to find target foobar" 00:11:54.236 }' 00:11:54.236 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:54.236 { 00:11:54.236 "nqn": "nqn.2016-06.io.spdk:cnode28040", 00:11:54.236 "tgt_name": "foobar", 00:11:54.236 "method": "nvmf_create_subsystem", 00:11:54.236 "req_id": 1 00:11:54.236 } 00:11:54.236 Got JSON-RPC error response 00:11:54.236 response: 00:11:54.236 { 00:11:54.236 "code": -32603, 00:11:54.236 "message": "Unable to find target foobar" 00:11:54.236 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:54.236 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:54.236 10:40:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23550 00:11:54.494 [2024-11-07 10:40:22.044011] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode23550: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:54.494 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:54.494 { 00:11:54.494 "nqn": "nqn.2016-06.io.spdk:cnode23550", 00:11:54.494 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:54.494 "method": "nvmf_create_subsystem", 00:11:54.494 "req_id": 1 00:11:54.494 } 00:11:54.494 Got JSON-RPC error response 00:11:54.494 response: 00:11:54.494 { 00:11:54.494 "code": -32602, 00:11:54.494 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:54.494 }' 00:11:54.494 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:54.494 { 00:11:54.494 "nqn": "nqn.2016-06.io.spdk:cnode23550", 00:11:54.494 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:54.494 "method": "nvmf_create_subsystem", 00:11:54.494 "req_id": 1 00:11:54.494 } 00:11:54.494 Got JSON-RPC error response 00:11:54.494 response: 00:11:54.494 { 00:11:54.494 "code": -32602, 00:11:54.494 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:54.494 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:54.494 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:54.494 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16523 00:11:54.753 [2024-11-07 10:40:22.232588] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16523: invalid model number 'SPDK_Controller' 00:11:54.753 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:54.753 { 00:11:54.753 "nqn": "nqn.2016-06.io.spdk:cnode16523", 00:11:54.753 "model_number": "SPDK_Controller\u001f", 00:11:54.753 "method": "nvmf_create_subsystem", 00:11:54.753 "req_id": 1 00:11:54.753 } 00:11:54.753 Got JSON-RPC error response 00:11:54.753 response: 00:11:54.753 { 00:11:54.753 "code": -32602, 00:11:54.753 "message": "Invalid MN SPDK_Controller\u001f" 00:11:54.753 }' 00:11:54.753 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:54.753 { 00:11:54.753 "nqn": "nqn.2016-06.io.spdk:cnode16523", 00:11:54.753 "model_number": "SPDK_Controller\u001f", 00:11:54.753 "method": "nvmf_create_subsystem", 00:11:54.753 "req_id": 1 00:11:54.753 } 00:11:54.753 Got JSON-RPC error response 00:11:54.753 response: 00:11:54.753 { 00:11:54.753 "code": -32602, 00:11:54.753 "message": "Invalid MN SPDK_Controller\u001f" 00:11:54.753 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:54.753 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:54.753 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:54.753 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:54.753 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
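The three rejections above exercise nvmf_create_subsystem straight over JSON-RPC. Reproduced standalone (paths as used in this workspace; $'\037' is the same 0x1f control byte the test appends):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28040
#  -> -32603 "Unable to find target foobar"
$rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23550
#  -> -32602 "Invalid SN ..." (control byte rejected in the serial number)
$rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16523
#  -> -32602 "Invalid MN ..." (same validation for the model number)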
-- target/invalid.sh@21 -- # local chars 00:11:54.753 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:54.753 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:54.753 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.753 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:54.753 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:54.753 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:54.753 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.753 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.753 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.754 10:40:22 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:54.754 10:40:22 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.754 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ P == \- ]] 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'PG ZkhuiJb3oWtO=1R8i2' 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'PG ZkhuiJb3oWtO=1R8i2' nqn.2016-06.io.spdk:cnode28196 00:11:55.014 [2024-11-07 10:40:22.609900] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28196: invalid serial number 'PG ZkhuiJb3oWtO=1R8i2' 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:55.014 { 00:11:55.014 "nqn": "nqn.2016-06.io.spdk:cnode28196", 00:11:55.014 "serial_number": "PG ZkhuiJb3oWtO=1R8i2", 00:11:55.014 "method": "nvmf_create_subsystem", 00:11:55.014 "req_id": 1 00:11:55.014 } 00:11:55.014 Got JSON-RPC error response 00:11:55.014 response: 00:11:55.014 { 00:11:55.014 "code": -32602, 00:11:55.014 "message": "Invalid SN PG ZkhuiJb3oWtO=1R8i2" 00:11:55.014 }' 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:55.014 { 00:11:55.014 "nqn": "nqn.2016-06.io.spdk:cnode28196", 00:11:55.014 "serial_number": "PG ZkhuiJb3oWtO=1R8i2", 00:11:55.014 "method": "nvmf_create_subsystem", 00:11:55.014 "req_id": 1 00:11:55.014 } 00:11:55.014 Got JSON-RPC error response 00:11:55.014 response: 00:11:55.014 { 00:11:55.014 "code": -32602, 00:11:55.014 "message": "Invalid SN PG ZkhuiJb3oWtO=1R8i2" 00:11:55.014 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:55.014 10:40:22 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:55.014 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.015 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.015 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:55.015 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:55.015 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:55.015 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.015 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.015 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:55.015 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:55.015 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:55.015 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.015 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.015 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# echo -e '\x4b' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:55.275 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:55.276 10:40:22 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# printf %x 67 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.276 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:55.535 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:55.535 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:55.535 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.535 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.535 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:11:55.535 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:55.535 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:11:55.535 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.535 10:40:22 
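The iterations above (they run through one final backtick appended just below) are target/invalid.sh composing a 41-character string of shell-hostile bytes one character at a time: pick a code point, render it as hex with printf %x, expand the \xNN escape with echo -e, and append it to string. A minimal standalone sketch of the same trick, with an assumed function name and a printable-only range that the real script does not necessarily enforce:

    #!/usr/bin/env bash
    # Same append mechanism as the trace: printf renders a byte value as a
    # \xNN escape, and echo -e turns that escape back into the character.
    gen_hostile_string() {
        local length=$1 string='' ll char
        for ((ll = 0; ll < length; ll++)); do
            char=$(printf '\\x%x' $((RANDOM % 94 + 33)))   # 0x21-0x7e, printable
            string+=$(echo -e "$char")
        done
        echo "$string"
    }
    gen_hostile_string 41    # e.g. '&+saUK<Pl"yfFh9pgY)Rzx<;-N=sx...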
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.535 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:55.535 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:55.535 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:55.535 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.535 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.535 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ' == \- ]] 00:11:55.535 10:40:22 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ''\''&+saUK<Pl"yfFh9pgY)Rzx<;-N=sx'$'\177''3ZCtHsL5#`' (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:58.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.127 --rc genhtml_branch_coverage=1 00:11:58.127 --rc genhtml_function_coverage=1 00:11:58.127 --rc genhtml_legend=1 00:11:58.127 --rc geninfo_all_blocks=1 00:11:58.127 --rc geninfo_unexecuted_blocks=1 00:11:58.127 00:11:58.127 ' 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:58.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.127 --rc genhtml_branch_coverage=1 00:11:58.127 --rc genhtml_function_coverage=1 00:11:58.127 --rc genhtml_legend=1 00:11:58.127 --rc geninfo_all_blocks=1 00:11:58.127 --rc geninfo_unexecuted_blocks=1 00:11:58.127 00:11:58.127 ' 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:58.127 --rc lcov_branch_coverage=1 --rc
lcov_function_coverage=1 00:11:58.127 --rc genhtml_branch_coverage=1 00:11:58.127 --rc genhtml_function_coverage=1 00:11:58.127 --rc genhtml_legend=1 00:11:58.127 --rc geninfo_all_blocks=1 00:11:58.127 --rc geninfo_unexecuted_blocks=1 00:11:58.127 00:11:58.127 ' 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:58.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.127 --rc genhtml_branch_coverage=1 00:11:58.127 --rc genhtml_function_coverage=1 00:11:58.127 --rc genhtml_legend=1 00:11:58.127 --rc geninfo_all_blocks=1 00:11:58.127 --rc geninfo_unexecuted_blocks=1 00:11:58.127 00:11:58.127 ' 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.127 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.128 10:40:25 
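A few records back, the scripts/common.sh trace (decimal 1 ... ver1[v]=1 ... (( ver1[v] < ver2[v] )) ... return 0) appears to be a per-component version comparison: each dot-separated field is validated as a decimal, compared left to right, and the verdict drives which lcov --rc option spelling autotest_common.sh exports. A condensed sketch of that logic, with an assumed helper name rather than the script's own:

    # Compare dot-separated versions field by field, as in the trace above;
    # missing fields default to 0. Returns 0 (true) when $1 < $2.
    version_lt() {
        local -a ver1 ver2
        IFS=. read -ra ver1 <<< "$1"
        IFS=. read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal
    }
    version_lt 1.2 2 && echo "old lcov: use the 1.x-era --rc option names"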
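The enormous PATH just above is paths/export.sh prepending the same golang/golangci/protoc directories every time a nested script re-sources it, so each nesting level adds another copy of the triplet. An idempotent prepend, sketched (prepend_path is an assumed helper, not something the repo provides):

    # Only prepend a directory if it is not already somewhere in PATH.
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present: leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/go/1.21.1/bin
    export PATH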
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:58.128 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:58.128 10:40:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 
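The "[: : integer expression expected" message above is a genuine shell diagnostic, not trace noise: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', and test's -eq refuses a non-integer operand, so an unset flag makes every sourcing of the file complain. Reproduced and guarded in isolation (the variable name is illustrative only):

    flag=""                       # empty, like the value reaching line 33 above
    [ "$flag" -eq 1 ]             # prints: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] || echo "empty expands to 0: comparison is well-formed"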
00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:04.697 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:04.697 10:40:32 
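gather_supported_nvmf_pci_devs above assembles allow-lists of vendor:device pairs (Intel E810/X722 parts, then a run of Mellanox ConnectX and BlueField IDs) and, because this run uses SPDK_TEST_NVMF_NICS=mlx5, discards everything but the mlx list before walking the bus. The shape of that list, sketched without the real script's pci_bus_cache lookup:

    intel=0x8086 mellanox=0x15b3
    mlx=()
    for dev in 0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013; do
        mlx+=("$mellanox:$dev")    # the ConnectX/BlueField device IDs seen above
    done
    pci_devs=("${mlx[@]}")         # mlx5 run: only Mellanox NICs are considered
    echo "${pci_devs[@]}"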
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:04.697 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:04.697 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.697 10:40:32 
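Each "Found net devices under ..." line comes from a sysfs glob: a PCI function advertises its kernel net devices under /sys/bus/pci/devices/<addr>/net/, and stripping the path prefix leaves the interface name. The lookup, isolated:

    pci=0000:d9:00.0                                   # first ConnectX port above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob the sysfs children
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the basenames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> mlx_0_0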
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:04.697 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:04.697 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:04.698 10:40:32 
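rdma_device_init above loads the kernel RDMA stack module by module before any IPs are assigned; the same sequence reads more compactly as a loop (identical modules, identical order):

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done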
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:04.698 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:04.698 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:04.698 altname enp217s0f0np0 00:12:04.698 altname ens818f0np0 00:12:04.698 inet 192.168.100.8/24 scope global mlx_0_0 00:12:04.698 valid_lft forever preferred_lft forever 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:04.698 7: mlx_0_1: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:12:04.698 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:04.698 altname enp217s0f1np1 00:12:04.698 altname ens818f1np1 00:12:04.698 inet 192.168.100.9/24 scope global mlx_0_1 00:12:04.698 valid_lft forever preferred_lft forever 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:04.698 10:40:32 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:04.698 192.168.100.9' 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:04.698 192.168.100.9' 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:04.698 192.168.100.9' 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:04.698 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:04.699 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:04.699 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:04.699 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:04.699 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:04.699 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:04.699 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:04.699 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:04.699 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.958 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3732717 00:12:04.958 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3732717 00:12:04.958 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 3732717 ']' 00:12:04.958 
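The repeated ip/awk/cut pipelines above collapse into one helper: ip -o prints a single record per line, field 4 is the CIDR address (192.168.100.8/24), and cut drops the prefix length; head and tail then split the harvested list into the first and second target IPs. Condensed:

    get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9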
10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.958 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:04.958 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.958 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:04.958 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.958 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:04.958 [2024-11-07 10:40:32.419106] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:12:04.958 [2024-11-07 10:40:32.419156] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.958 [2024-11-07 10:40:32.493539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:04.958 [2024-11-07 10:40:32.532736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.958 [2024-11-07 10:40:32.532773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.958 [2024-11-07 10:40:32.532783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.958 [2024-11-07 10:40:32.532791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.958 [2024-11-07 10:40:32.532798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:04.958 [2024-11-07 10:40:32.534248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.958 [2024-11-07 10:40:32.534332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.958 [2024-11-07 10:40:32.534334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.958 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:04.958 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:12:04.958 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:04.958 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:04.958 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.217 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.217 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:05.217 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.217 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.217 [2024-11-07 10:40:32.699151] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x159e570/0x15a2a60) succeed. 00:12:05.217 [2024-11-07 10:40:32.708434] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x159fb60/0x15e4100) succeed. 
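nvmfappstart plus the first rpc_cmd above amount to two steps: start the target app on cores 1-3 (mask 0xE) and create the RDMA transport over the default /var/tmp/spdk.sock RPC socket. Run by hand it would look roughly like this (paths relative to the spdk checkout; the harness's waitforlisten is replaced here by a crude sleep):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    sleep 2    # stand-in for waitforlisten on /var/tmp/spdk.sock
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192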
00:12:05.217 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.217 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:05.217 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.218 [2024-11-07 10:40:32.819555] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.218 NULL1 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3732739 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.218 10:40:32 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.218 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.477 10:40:32 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.477 10:40:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.736 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.736 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:05.736 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.736 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.736 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.995 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.995 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:05.995 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.995 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.995 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.254 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.254 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:06.254 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.254 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.254 10:40:33 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.822 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.822 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:06.822 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.822 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.822 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.081 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.081 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 
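[Editor's note] What lines 27-35 of connect_stress.sh are doing in the trace above: build a batch of 20 RPCs in rpc.txt (the payload cat'd on each iteration is not visible in the xtrace, so it is left abstract here), then keep replaying the batch for as long as the connect/disconnect client (PERF_PID 3732739, started with -t 10 for a ten-second run) stays alive. A sketch of the supervision idiom only:

    # kill -0 sends no signal; it merely reports whether the PID exists.
    # Replay the RPC batch while the stress client is still running.
    while kill -0 "$PERF_PID" 2> /dev/null; do
        rpc_cmd < "$rpcs"   # drive the target with the batched RPCs
    done
    wait "$PERF_PID"        # reap the client once it exits

The repeated kill -0 / rpc_cmd pairs filling the next several seconds of this log are exactly that loop ticking.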
00:12:07.081 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.081 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.081 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.340 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.340 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:07.340 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.340 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.340 10:40:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.598 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.598 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:07.598 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.598 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.598 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.166 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.166 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:08.166 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.166 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.166 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.424 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.424 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:08.424 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.424 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.424 10:40:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.683 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.683 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:08.683 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.683 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.683 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.942 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.942 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3732739 00:12:08.942 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.942 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.942 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.201 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.201 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:09.201 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.201 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.201 10:40:36 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.768 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.768 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:09.768 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.768 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.768 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.026 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.026 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:10.026 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.026 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.026 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.285 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.285 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:10.285 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.285 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.285 10:40:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.543 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.543 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:10.543 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.543 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.543 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.111 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.111 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 3732739 00:12:11.111 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.111 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.111 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.370 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.370 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:11.370 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.370 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.370 10:40:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.629 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.629 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:11.629 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.629 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.629 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.887 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.887 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:11.887 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.887 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.887 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.161 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.161 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:12.161 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.161 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.161 10:40:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.728 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.728 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:12.728 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.728 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.728 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.987 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.987 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 3732739 00:12:12.987 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.987 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.987 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.246 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.246 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:13.246 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.246 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.246 10:40:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.505 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.505 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:13.505 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.505 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.505 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.072 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.072 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:14.072 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.072 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.072 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.330 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.330 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:14.330 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.330 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.330 10:40:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.589 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.589 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:14.589 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.589 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.589 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.848 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.848 10:40:42 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:14.848 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.848 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.848 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.105 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.105 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:15.105 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.105 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.105 10:40:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.671 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3732739 00:12:15.671 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3732739) - No such process 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3732739 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:15.671 rmmod nvme_rdma 00:12:15.671 rmmod nvme_fabrics 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3732717 ']' 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3732717 00:12:15.671 10:40:43 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 3732717 ']' 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 3732717 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3732717 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3732717' 00:12:15.671 killing process with pid 3732717 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 3732717 00:12:15.671 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 3732717 00:12:15.930 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:15.930 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:15.930 00:12:15.930 real 0m17.974s 00:12:15.930 user 0m39.966s 00:12:15.930 sys 0m7.668s 00:12:15.930 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:15.930 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.930 ************************************ 00:12:15.930 END TEST nvmf_connect_stress 00:12:15.930 ************************************ 00:12:15.930 10:40:43 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:12:15.930 10:40:43 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:15.930 10:40:43 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:15.930 10:40:43 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:15.930 ************************************ 00:12:15.930 START TEST nvmf_fused_ordering 00:12:15.930 ************************************ 00:12:15.930 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:12:15.930 * Looking for test storage... 
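[Editor's note] The killprocess helper traced here is deliberately defensive: it requires a non-empty PID, confirms the process still exists with kill -0, resolves the command name via ps, and only then signals it, so a sudo wrapper is never killed by mistake. A sketch reconstructed from the xtrace, hedged in that only the Linux branch and the non-sudo path are visible in this log:

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1               # must still be alive
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name == sudo ]] && return 1  # never signal sudo itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                      # reap; errors tolerated
    }

Here the name resolves to reactor_1, so the nvmf_tgt app (PID 3732717) is signalled and reaped, completing nvmftestfini after the nvme-rdma and nvme-fabrics modules were already unloaded.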
00:12:15.930 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:15.930 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:15.930 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:12:15.930 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:16.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.190 --rc genhtml_branch_coverage=1 00:12:16.190 --rc genhtml_function_coverage=1 00:12:16.190 --rc genhtml_legend=1 00:12:16.190 --rc geninfo_all_blocks=1 00:12:16.190 --rc geninfo_unexecuted_blocks=1 00:12:16.190 00:12:16.190 ' 00:12:16.190 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:16.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.190 --rc genhtml_branch_coverage=1 00:12:16.190 --rc genhtml_function_coverage=1 00:12:16.190 --rc genhtml_legend=1 00:12:16.190 --rc geninfo_all_blocks=1 00:12:16.190 --rc geninfo_unexecuted_blocks=1 00:12:16.190 00:12:16.190 ' 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:16.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.191 --rc genhtml_branch_coverage=1 00:12:16.191 --rc genhtml_function_coverage=1 00:12:16.191 --rc genhtml_legend=1 00:12:16.191 --rc geninfo_all_blocks=1 00:12:16.191 --rc geninfo_unexecuted_blocks=1 00:12:16.191 00:12:16.191 ' 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:16.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.191 --rc genhtml_branch_coverage=1 00:12:16.191 --rc genhtml_function_coverage=1 00:12:16.191 --rc genhtml_legend=1 00:12:16.191 --rc geninfo_all_blocks=1 00:12:16.191 --rc geninfo_unexecuted_blocks=1 00:12:16.191 00:12:16.191 ' 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:16.191 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:16.191 10:40:43 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:22.760 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.760 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:22.760 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:22.761 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:22.761 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:22.761 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:22.761 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.761 10:40:50 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:22.761 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:22.762 
10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:22.762 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:22.762 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:22.762 altname enp217s0f0np0 00:12:22.762 altname ens818f0np0 00:12:22.762 inet 192.168.100.8/24 scope global mlx_0_0 00:12:22.762 valid_lft forever preferred_lft forever 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:22.762 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:22.762 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:22.762 altname enp217s0f1np1 00:12:22.762 altname ens818f1np1 00:12:22.762 inet 192.168.100.9/24 scope global mlx_0_1 00:12:22.762 valid_lft forever preferred_lft forever 00:12:22.762 10:40:50 
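Both ports come up with addresses in the 192.168.100.0/24 range used throughout these tests. The get_ip_address helper traced above is an ip/awk/cut pipeline, and the two-line list it produces is later split with head/tail into the first and second target IPs. A self-contained sketch of both steps (the function name mirrors the trace but the body is a paraphrase, not the verbatim common.sh):
get_ip_address() {
  local interface=$1
  # -o prints one record per line; field 4 is ADDR/PREFIX, cut drops the prefix length
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
rdma_ip_list=$(get_ip_address mlx_0_0; get_ip_address mlx_0_1)
NVMF_FIRST_TARGET_IP=$(echo "$rdma_ip_list" | head -n 1)               # 192.168.100.8 on this node
NVMF_SECOND_TARGET_IP=$(echo "$rdma_ip_list" | tail -n +2 | head -n 1) # 192.168.100.9 on this node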
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:22.762 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:23.022 
10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:23.022 192.168.100.9' 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:23.022 192.168.100.9' 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:23.022 192.168.100.9' 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3737734 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3737734 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 3737734 ']' 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.022 10:40:50 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:23.022 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.022 [2024-11-07 10:40:50.565212] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:12:23.022 [2024-11-07 10:40:50.565262] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.022 [2024-11-07 10:40:50.641441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.022 [2024-11-07 10:40:50.679556] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.022 [2024-11-07 10:40:50.679592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.022 [2024-11-07 10:40:50.679602] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.022 [2024-11-07 10:40:50.679610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.022 [2024-11-07 10:40:50.679617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.022 [2024-11-07 10:40:50.680225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.281 [2024-11-07 10:40:50.845176] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x169fea0/0x16a4390) succeed. 00:12:23.281 [2024-11-07 10:40:50.854063] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16a1350/0x16e5a30) succeed. 
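With nvmf_tgt up and both IB devices created, the entries that follow configure the target over the RPC socket. Condensed into direct scripts/rpc.py calls (the same RPC names and arguments as the rpc_cmd trace below; the rpc.py path assumes the current directory is an SPDK checkout), the sequence is roughly:
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192      # RDMA transport, 8 KiB IO unit
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512                                      # 1000 MiB null bdev, 512-byte blocks
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1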
00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.281 [2024-11-07 10:40:50.900215] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.281 NULL1 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.281 10:40:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:23.540 [2024-11-07 10:40:50.956978] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:12:23.540 [2024-11-07 10:40:50.957015] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3737945 ]
00:12:23.540 Attached to nqn.2016-06.io.spdk:cnode1
00:12:23.540 Namespace ID: 1 size: 1GB
00:12:23.540 fused_ordering(0)
00:12:23.540 fused_ordering(1)
00:12:23.540 fused_ordering(2)
[... fused_ordering(3) through fused_ordering(1022): one per-operation line each, elided ...]
00:12:24.062 fused_ordering(1023)
00:12:24.062 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:12:24.062 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:12:24.062 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:24.062 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:12:24.062 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:12:24.062 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:12:24.062 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:12:24.062 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:24.062 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:12:24.062 rmmod nvme_rdma
00:12:24.062 rmmod nvme_fabrics
00:12:24.062 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:24.062 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:12:24.062 10:40:51
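nvmftestfini then unwinds the run: drop the EXIT trap, unload the initiator modules under set +e with retries, and kill the target by pid. A condensed sketch of that teardown (the pid is this run's; function bodies are paraphrased from the trace, not the verbatim common.sh):
nvmfpid=3737734
trap - SIGINT SIGTERM EXIT
sync
set +e
for i in {1..20}; do
  modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break   # retry while references linger
done
set -e
kill "$nvmfpid"                      # killprocess: stop the nvmf_tgt reactor
wait "$nvmfpid" 2>/dev/null || true  # reaps it when nvmf_tgt is a child of this shell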
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:24.062 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3737734 ']' 00:12:24.062 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3737734 00:12:24.062 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 3737734 ']' 00:12:24.062 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 3737734 00:12:24.062 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:12:24.062 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:24.062 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3737734 00:12:24.322 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:24.322 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:24.322 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3737734' 00:12:24.322 killing process with pid 3737734 00:12:24.322 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 3737734 00:12:24.322 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 3737734 00:12:24.322 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:24.322 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:24.322 00:12:24.322 real 0m8.457s 00:12:24.322 user 0m3.936s 00:12:24.322 sys 0m5.738s 00:12:24.322 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:24.322 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.322 ************************************ 00:12:24.322 END TEST nvmf_fused_ordering 00:12:24.322 ************************************ 00:12:24.322 10:40:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:12:24.322 10:40:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:24.322 10:40:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:24.322 10:40:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.322 ************************************ 00:12:24.322 START TEST nvmf_ns_masking 00:12:24.322 ************************************ 00:12:24.322 10:40:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:12:24.581 * Looking for test storage... 
00:12:24.581 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:24.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.581 --rc genhtml_branch_coverage=1 00:12:24.581 --rc genhtml_function_coverage=1 00:12:24.581 --rc genhtml_legend=1 00:12:24.581 --rc geninfo_all_blocks=1 00:12:24.581 --rc geninfo_unexecuted_blocks=1 00:12:24.581 00:12:24.581 ' 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:24.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.581 --rc genhtml_branch_coverage=1 00:12:24.581 --rc genhtml_function_coverage=1 00:12:24.581 --rc genhtml_legend=1 00:12:24.581 --rc geninfo_all_blocks=1 00:12:24.581 --rc geninfo_unexecuted_blocks=1 00:12:24.581 00:12:24.581 ' 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:24.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.581 --rc genhtml_branch_coverage=1 00:12:24.581 --rc genhtml_function_coverage=1 00:12:24.581 --rc genhtml_legend=1 00:12:24.581 --rc geninfo_all_blocks=1 00:12:24.581 --rc geninfo_unexecuted_blocks=1 00:12:24.581 00:12:24.581 ' 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:24.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.581 --rc genhtml_branch_coverage=1 00:12:24.581 --rc genhtml_function_coverage=1 00:12:24.581 --rc genhtml_legend=1 00:12:24.581 --rc geninfo_all_blocks=1 00:12:24.581 --rc geninfo_unexecuted_blocks=1 00:12:24.581 00:12:24.581 ' 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.581 10:40:52 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.581 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:24.582 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:24.582 10:40:52 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=eb085539-55dc-47bd-a51f-d806a15b8778 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=016b8084-9c26-4144-9ec3-226d93c323ee 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=82cb1d78-d364-40c2-9f20-590fb7b06721 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:24.582 10:40:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.800 10:40:59 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:32.800 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:32.800 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:32.800 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:32.800 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:32.801 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:32.801 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:32.801 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:32.801 altname enp217s0f0np0 00:12:32.801 altname ens818f0np0 00:12:32.801 inet 192.168.100.8/24 scope global mlx_0_0 00:12:32.801 valid_lft forever preferred_lft forever 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:32.801 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:32.801 link/ether 
ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:32.801 altname enp217s0f1np1 00:12:32.801 altname ens818f1np1 00:12:32.801 inet 192.168.100.9/24 scope global mlx_0_1 00:12:32.801 valid_lft forever preferred_lft forever 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 
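The address lookups traced through this stretch all reduce to the same three-stage pipeline; a condensed sketch of the get_ip_address helper, reconstructed from this trace, is:

    # Condensed sketch of the get_ip_address helper seen in this trace:
    # print an interface's first IPv4 address with the /prefix stripped.
    get_ip_address() {
        local interface=$1
        # `ip -o` prints one record per line; field 4 is "ADDR/PREFIX",
        # e.g. "192.168.100.8/24" for mlx_0_0 in this run.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8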
00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:32.801 192.168.100.9' 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:32.801 192.168.100.9' 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:32.801 192.168.100.9' 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:12:32.801 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3741406 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3741406 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3741406 ']' 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.802 10:40:59 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:32.802 [2024-11-07 10:40:59.294574] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:12:32.802 [2024-11-07 10:40:59.294620] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.802 [2024-11-07 10:40:59.369222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.802 [2024-11-07 10:40:59.407953] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.802 [2024-11-07 10:40:59.407991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.802 [2024-11-07 10:40:59.408000] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.802 [2024-11-07 10:40:59.408008] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.802 [2024-11-07 10:40:59.408031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.802 [2024-11-07 10:40:59.408647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:32.802 [2024-11-07 10:40:59.724856] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x177eb80/0x1783070) succeed. 00:12:32.802 [2024-11-07 10:40:59.733852] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1780030/0x17c4710) succeed. 
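Collected out of the trace, the target-side bring-up for this test is a short sequence of rpc.py calls, and the namespace-visibility assertions that follow all go through one helper; both are sketched here with values taken from this run (rpc.py path as on this workspace, /dev/nvme0 as enumerated on this host):

    # Target bring-up as traced in this run (malloc size/block size, NQN,
    # serial and listen address copied from the log):
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1     # 64 MiB bdev, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # Condensed sketch of the ns_is_visible check repeated below: an NSID
    # counts as visible when nvme list-ns reports it and its NGUID is non-zero.
    ns_is_visible() {
        local nsid=$1                                         # e.g. 0x1
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [ "$nguid" != "00000000000000000000000000000000" ]
    }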
00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:32.802 Malloc1 00:12:32.802 10:40:59 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:32.802 Malloc2 00:12:32.802 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:32.802 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:33.061 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:33.320 [2024-11-07 10:41:00.749112] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:33.320 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:33.320 10:41:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 82cb1d78-d364-40c2-9f20-590fb7b06721 -a 192.168.100.8 -s 4420 -i 4 00:12:33.579 10:41:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.579 10:41:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:12:33.580 10:41:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.580 10:41:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:33.580 10:41:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:12:35.484 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:35.484 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:35.484 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.484 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:35.484 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.484 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:12:35.484 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:35.484 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:12:35.484 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:35.484 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:35.484 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:35.743 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.743 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.743 [ 0]:0x1 00:12:35.743 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.743 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.743 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1089a72aa38d4640bc424bd38616fc45 00:12:35.743 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1089a72aa38d4640bc424bd38616fc45 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.743 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:35.743 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:35.743 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.743 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.743 [ 0]:0x1 00:12:36.003 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:36.003 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.003 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1089a72aa38d4640bc424bd38616fc45 00:12:36.003 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1089a72aa38d4640bc424bd38616fc45 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.003 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:36.003 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:36.003 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:36.003 [ 1]:0x2 00:12:36.003 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:36.003 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.003 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e9160ddd1ba84f6b866493cae4de9908 00:12:36.003 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e9160ddd1ba84f6b866493cae4de9908 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.003 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:36.003 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:12:36.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.262 10:41:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.521 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:36.779 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:36.780 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 82cb1d78-d364-40c2-9f20-590fb7b06721 -a 192.168.100.8 -s 4420 -i 4 00:12:37.039 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:37.039 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:12:37.039 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.039 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:12:37.039 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:12:37.039 10:41:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:38.946 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.205 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.205 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.205 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:39.205 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.205 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:39.205 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:39.205 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:39.205 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:39.205 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:39.205 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.205 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.205 [ 0]:0x2 00:12:39.205 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.205 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.205 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e9160ddd1ba84f6b866493cae4de9908 00:12:39.205 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e9160ddd1ba84f6b866493cae4de9908 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.205 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:39.464 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:39.464 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.465 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.465 [ 0]:0x1 00:12:39.465 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.465 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.465 10:41:06 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1089a72aa38d4640bc424bd38616fc45 00:12:39.465 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1089a72aa38d4640bc424bd38616fc45 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.465 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:39.465 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.465 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.465 [ 1]:0x2 00:12:39.465 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.465 10:41:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.465 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e9160ddd1ba84f6b866493cae4de9908 00:12:39.465 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e9160ddd1ba84f6b866493cae4de9908 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.465 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:39.724 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:39.724 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:39.724 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:39.724 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:39.724 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.724 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:39.724 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.724 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:39.724 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.724 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.724 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.724 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.724 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:39.725 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.725 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:39.725 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( 
es > 128 )) 00:12:39.725 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:39.725 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:39.725 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:39.725 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.725 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.725 [ 0]:0x2 00:12:39.725 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.725 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.725 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e9160ddd1ba84f6b866493cae4de9908 00:12:39.725 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e9160ddd1ba84f6b866493cae4de9908 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.725 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:39.725 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.984 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:40.243 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:40.243 10:41:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 82cb1d78-d364-40c2-9f20-590fb7b06721 -a 192.168.100.8 -s 4420 -i 4 00:12:40.501 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:40.501 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:12:40.501 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.501 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:12:40.501 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:12:40.501 10:41:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.037 10:41:10 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:43.037 [ 0]:0x1 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1089a72aa38d4640bc424bd38616fc45 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1089a72aa38d4640bc424bd38616fc45 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:43.037 [ 1]:0x2 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e9160ddd1ba84f6b866493cae4de9908 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e9160ddd1ba84f6b866493cae4de9908 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:43.037 10:41:10 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:43.037 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:43.038 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.038 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:43.038 [ 0]:0x2 00:12:43.038 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:43.038 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.038 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e9160ddd1ba84f6b866493cae4de9908 00:12:43.038 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e9160ddd1ba84f6b866493cae4de9908 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.038 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:43.038 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:43.038 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:43.038 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:43.038 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.038 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:43.038 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.038 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:43.038 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.038 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:43.038 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:43.038 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:43.298 [2024-11-07 10:41:10.776464] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:43.298 request: 00:12:43.298 { 00:12:43.298 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.298 "nsid": 2, 00:12:43.298 "host": "nqn.2016-06.io.spdk:host1", 00:12:43.298 "method": "nvmf_ns_remove_host", 00:12:43.298 "req_id": 1 00:12:43.298 } 00:12:43.298 Got JSON-RPC error response 00:12:43.298 response: 00:12:43.298 { 00:12:43.298 "code": -32602, 00:12:43.298 "message": "Invalid parameters" 00:12:43.298 } 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:43.298 10:41:10 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:43.298 [ 0]:0x2 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e9160ddd1ba84f6b866493cae4de9908 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e9160ddd1ba84f6b866493cae4de9908 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:43.298 10:41:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.558 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3743452 00:12:43.558 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:43.558 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.558 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3743452 /var/tmp/host.sock 00:12:43.558 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3743452 ']' 00:12:43.558 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:12:43.558 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:43.558 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:43.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:12:43.558 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:43.558 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:43.817 [2024-11-07 10:41:11.258183] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:12:43.817 [2024-11-07 10:41:11.258237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3743452 ] 00:12:43.817 [2024-11-07 10:41:11.328070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.817 [2024-11-07 10:41:11.367078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.078 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:44.078 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:12:44.078 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.337 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:44.337 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid eb085539-55dc-47bd-a51f-d806a15b8778 00:12:44.337 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:44.337 10:41:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g EB08553955DC47BDA51FD806A15B8778 -i 00:12:44.597 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 016b8084-9c26-4144-9ec3-226d93c323ee 00:12:44.597 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:44.597 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 016B80849C2641449EC3226D93C323EE -i 00:12:44.857 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:44.857 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:45.117 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:45.117 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:12:45.377 nvme0n1 00:12:45.377 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:45.377 10:41:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:45.636 nvme1n2 00:12:45.636 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:45.636 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:45.636 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:45.636 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:45.636 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:45.895 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:45.896 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:45.896 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:45.896 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:46.155 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ eb085539-55dc-47bd-a51f-d806a15b8778 == \e\b\0\8\5\5\3\9\-\5\5\d\c\-\4\7\b\d\-\a\5\1\f\-\d\8\0\6\a\1\5\b\8\7\7\8 ]] 00:12:46.155 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:46.155 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:46.155 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:46.155 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 016b8084-9c26-4144-9ec3-226d93c323ee == \0\1\6\b\8\0\8\4\-\9\c\2\6\-\4\1\4\4\-\9\e\c\3\-\2\2\6\d\9\3\c\3\2\3\e\e ]] 00:12:46.155 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.414 10:41:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:46.674 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid eb085539-55dc-47bd-a51f-d806a15b8778 00:12:46.674 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:46.674 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g EB08553955DC47BDA51FD806A15B8778 00:12:46.674 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:46.674 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g EB08553955DC47BDA51FD806A15B8778 00:12:46.674 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:46.674 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.674 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:46.674 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.674 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:46.674 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.674 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:46.674 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:46.674 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g EB08553955DC47BDA51FD806A15B8778 00:12:46.934 [2024-11-07 10:41:14.361253] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:46.934 [2024-11-07 10:41:14.361290] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:46.934 [2024-11-07 10:41:14.361301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.934 request: 00:12:46.934 { 00:12:46.934 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:46.934 "namespace": { 00:12:46.934 "bdev_name": "invalid", 00:12:46.934 "nsid": 1, 00:12:46.934 "nguid": "EB08553955DC47BDA51FD806A15B8778", 00:12:46.934 "no_auto_visible": false 00:12:46.934 }, 00:12:46.934 "method": "nvmf_subsystem_add_ns", 00:12:46.934 "req_id": 1 00:12:46.934 } 00:12:46.934 Got JSON-RPC error response 00:12:46.934 response: 00:12:46.934 { 00:12:46.934 "code": -32602, 00:12:46.934 "message": "Invalid parameters" 00:12:46.934 } 00:12:46.934 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:46.934 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:46.934 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:46.934 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:46.934 10:41:14 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid eb085539-55dc-47bd-a51f-d806a15b8778 00:12:46.934 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:46.934 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g EB08553955DC47BDA51FD806A15B8778 -i 00:12:46.934 10:41:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:49.471 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:49.471 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:49.471 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:49.471 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:49.471 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3743452 00:12:49.471 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3743452 ']' 00:12:49.471 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3743452 00:12:49.471 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:12:49.471 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:49.471 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3743452 00:12:49.471 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:49.471 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:49.471 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3743452' 00:12:49.471 killing process with pid 3743452 00:12:49.471 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3743452 00:12:49.471 10:41:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3743452 00:12:49.730 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.730 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:49.730 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:49.730 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:49.730 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:49.730 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:49.730 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:49.730 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:49.730 10:41:17 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:49.730 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:49.730 rmmod nvme_rdma 00:12:49.730 rmmod nvme_fabrics 00:12:49.730 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:49.730 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:49.730 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:49.730 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3741406 ']' 00:12:49.730 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3741406 00:12:49.730 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3741406 ']' 00:12:49.730 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3741406 00:12:49.730 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:12:49.989 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:49.989 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3741406 00:12:49.989 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:49.989 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:49.989 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3741406' 00:12:49.989 killing process with pid 3741406 00:12:49.989 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3741406 00:12:49.989 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3741406 00:12:50.249 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:50.249 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:50.249 00:12:50.249 real 0m25.718s 00:12:50.249 user 0m31.700s 00:12:50.249 sys 0m7.808s 00:12:50.249 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:50.249 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:50.249 ************************************ 00:12:50.249 END TEST nvmf_ns_masking 00:12:50.249 ************************************ 00:12:50.249 10:41:17 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:50.249 10:41:17 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:50.249 10:41:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:50.249 10:41:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:50.249 10:41:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.249 ************************************ 00:12:50.249 START TEST nvmf_nvme_cli 00:12:50.249 ************************************ 
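[Annotation] One detail from the masking run above is worth capturing before the nvme_cli trace begins: the uuid2nguid helper that fed nvmf_subsystem_add_ns -g simply strips the dashes from a UUID (the trace shows it piping through 'tr -d -'), and, judging by the traced output, upper-cases the result. A minimal sketch under those assumptions (the upper-casing step is inferred from the output, not shown in the trace):

uuid2nguid() {
    # Drop the dashes, then upper-case: in the trace,
    # eb085539-55dc-47bd-a51f-d806a15b8778 became EB08553955DC47BDA51FD806A15B8778.
    local u=${1//-/}
    echo "${u^^}"
}

nguid=$(uuid2nguid eb085539-55dc-47bd-a51f-d806a15b8778)
# Used as in the trace (rpc.py path abbreviated here):
#   rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i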
00:12:50.249 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:50.249 * Looking for test storage... 00:12:50.249 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:50.249 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:50.249 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:12:50.249 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:50.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.510 --rc genhtml_branch_coverage=1 00:12:50.510 --rc genhtml_function_coverage=1 00:12:50.510 --rc genhtml_legend=1 00:12:50.510 --rc geninfo_all_blocks=1 00:12:50.510 --rc geninfo_unexecuted_blocks=1 00:12:50.510 00:12:50.510 ' 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:50.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.510 --rc genhtml_branch_coverage=1 00:12:50.510 --rc genhtml_function_coverage=1 00:12:50.510 --rc genhtml_legend=1 00:12:50.510 --rc geninfo_all_blocks=1 00:12:50.510 --rc geninfo_unexecuted_blocks=1 00:12:50.510 00:12:50.510 ' 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:50.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.510 --rc genhtml_branch_coverage=1 00:12:50.510 --rc genhtml_function_coverage=1 00:12:50.510 --rc genhtml_legend=1 00:12:50.510 --rc geninfo_all_blocks=1 00:12:50.510 --rc geninfo_unexecuted_blocks=1 00:12:50.510 00:12:50.510 ' 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:50.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.510 --rc genhtml_branch_coverage=1 00:12:50.510 --rc genhtml_function_coverage=1 00:12:50.510 --rc genhtml_legend=1 00:12:50.510 --rc geninfo_all_blocks=1 00:12:50.510 --rc geninfo_unexecuted_blocks=1 00:12:50.510 00:12:50.510 ' 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.510 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:50.511 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:50.511 10:41:17 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:50.511 10:41:17 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:57.084 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:57.084 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:57.084 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:57.084 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
Linux ']' 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:57.084 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.085 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:57.085 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:57.085 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:12:57.085 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:57.085 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.085 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:57.085 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.085 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:57.085 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:57.085 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:12:57.344 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:57.345 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:57.345 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:57.345 altname enp217s0f0np0 00:12:57.345 altname ens818f0np0 00:12:57.345 inet 192.168.100.8/24 scope global mlx_0_0 00:12:57.345 valid_lft forever preferred_lft forever 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:57.345 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:57.345 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:57.345 altname enp217s0f1np1 00:12:57.345 altname ens818f1np1 00:12:57.345 inet 192.168.100.9/24 scope global mlx_0_1 00:12:57.345 valid_lft forever preferred_lft forever 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:57.345 10:41:24 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:57.345 192.168.100.9' 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:57.345 192.168.100.9' 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:57.345 192.168.100.9' 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:12:57.345 10:41:24 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3747861 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3747861 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 3747861 ']' 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:57.345 10:41:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.345 [2024-11-07 10:41:24.963080] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:12:57.345 [2024-11-07 10:41:24.963137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.605 [2024-11-07 10:41:25.041623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.605 [2024-11-07 10:41:25.084600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.605 [2024-11-07 10:41:25.084637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
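Note: the target launch captured above (nvmfappstart) boils down to starting the nvmf_tgt binary with an all-cores reactor mask and blocking until its RPC socket appears. A minimal sketch, with the binary path and flags copied from the log and the wait loop a simplification of the harness's waitforlisten helper:

    # Launch the NVMe-oF target: instance 0, tracepoint mask 0xFFFF, 4-core reactor mask 0xF
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Block until the target is up and listening on the default UNIX-domain RPC socket
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
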
00:12:57.605 [2024-11-07 10:41:25.084646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.605 [2024-11-07 10:41:25.084655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.605 [2024-11-07 10:41:25.084662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.605 [2024-11-07 10:41:25.086433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.605 [2024-11-07 10:41:25.086449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.605 [2024-11-07 10:41:25.086535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.605 [2024-11-07 10:41:25.086537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.173 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:58.173 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:12:58.173 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:58.173 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:58.173 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:58.173 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.173 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:58.173 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.173 10:41:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:58.433 [2024-11-07 10:41:25.870860] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2158df0/0x215d2e0) succeed. 00:12:58.433 [2024-11-07 10:41:25.879932] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x215a480/0x219e980) succeed. 
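Note: distilled from the rpc_cmd calls surrounding this point, the target-side provisioning is a short RPC sequence: create the RDMA transport, back two malloc bdevs, create the subsystem, attach the bdevs as namespaces, and expose an RDMA listener. A sketch using SPDK's scripts/rpc.py (which rpc_cmd wraps in this harness; sizes, NQN, and addresses copied from the trace):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

With the listener up, the initiator side of the test then discovers and connects over RDMA ('nvme discover' / 'nvme connect -i 15'), which is what produces the two-record discovery log and the /dev/nvme0n1, /dev/nvme0n2 namespaces seen below.
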
00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:58.433 Malloc0 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:58.433 Malloc1 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.433 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:58.433 [2024-11-07 10:41:26.100496] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:58.692 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.692 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:58.693 10:41:26 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.693 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:58.693 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.693 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:12:58.693 00:12:58.693 Discovery Log Number of Records 2, Generation counter 2 00:12:58.693 =====Discovery Log Entry 0====== 00:12:58.693 trtype: rdma 00:12:58.693 adrfam: ipv4 00:12:58.693 subtype: current discovery subsystem 00:12:58.693 treq: not required 00:12:58.693 portid: 0 00:12:58.693 trsvcid: 4420 00:12:58.693 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:58.693 traddr: 192.168.100.8 00:12:58.693 eflags: explicit discovery connections, duplicate discovery information 00:12:58.693 rdma_prtype: not specified 00:12:58.693 rdma_qptype: connected 00:12:58.693 rdma_cms: rdma-cm 00:12:58.693 rdma_pkey: 0x0000 00:12:58.693 =====Discovery Log Entry 1====== 00:12:58.693 trtype: rdma 00:12:58.693 adrfam: ipv4 00:12:58.693 subtype: nvme subsystem 00:12:58.693 treq: not required 00:12:58.693 portid: 0 00:12:58.693 trsvcid: 4420 00:12:58.693 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:58.693 traddr: 192.168.100.8 00:12:58.693 eflags: none 00:12:58.693 rdma_prtype: not specified 00:12:58.693 rdma_qptype: connected 00:12:58.693 rdma_cms: rdma-cm 00:12:58.693 rdma_pkey: 0x0000 00:12:58.693 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:58.693 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:58.693 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:58.693 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:58.693 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:58.693 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:58.693 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:58.693 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:58.693 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:58.693 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:58.693 10:41:26 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:59.630 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:59.630 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:12:59.630 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.630 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:12:59.630 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:12:59.630 10:41:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:02.166 /dev/nvme0n2 ]] 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:02.166 10:41:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.739 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:02.739 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:13:02.739 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:02.739 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.739 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:02.740 
10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:13:02.740 rmmod nvme_rdma 00:13:02.740 rmmod nvme_fabrics 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3747861 ']' 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3747861 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 3747861 ']' 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 3747861 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:02.740 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3747861 00:13:02.999 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:02.999 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:02.999 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3747861' 00:13:02.999 killing process with pid 3747861 00:13:02.999 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 3747861 00:13:02.999 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 3747861 00:13:03.259 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:03.259 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:13:03.259 00:13:03.259 real 0m12.991s 00:13:03.259 user 0m24.533s 00:13:03.259 sys 0m5.964s 00:13:03.259 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:03.259 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.259 ************************************ 00:13:03.259 END TEST nvmf_nvme_cli 00:13:03.259 ************************************ 00:13:03.259 10:41:30 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:13:03.259 10:41:30 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:13:03.259 10:41:30 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:03.259 10:41:30 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:03.259 10:41:30 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:03.259 ************************************ 00:13:03.259 START TEST nvmf_auth_target 00:13:03.259 ************************************ 00:13:03.259 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:13:03.259 * Looking for test storage... 00:13:03.259 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:03.259 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:03.259 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:13:03.259 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:03.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.519 --rc genhtml_branch_coverage=1 00:13:03.519 --rc genhtml_function_coverage=1 00:13:03.519 --rc genhtml_legend=1 00:13:03.519 --rc geninfo_all_blocks=1 00:13:03.519 --rc geninfo_unexecuted_blocks=1 00:13:03.519 00:13:03.519 ' 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:03.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.519 --rc genhtml_branch_coverage=1 00:13:03.519 --rc genhtml_function_coverage=1 00:13:03.519 --rc genhtml_legend=1 00:13:03.519 --rc geninfo_all_blocks=1 00:13:03.519 --rc geninfo_unexecuted_blocks=1 00:13:03.519 00:13:03.519 ' 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:03.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.519 --rc genhtml_branch_coverage=1 00:13:03.519 --rc genhtml_function_coverage=1 00:13:03.519 --rc genhtml_legend=1 00:13:03.519 --rc geninfo_all_blocks=1 00:13:03.519 --rc geninfo_unexecuted_blocks=1 00:13:03.519 00:13:03.519 ' 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:03.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.519 --rc genhtml_branch_coverage=1 00:13:03.519 --rc genhtml_function_coverage=1 00:13:03.519 --rc genhtml_legend=1 00:13:03.519 --rc geninfo_all_blocks=1 00:13:03.519 --rc geninfo_unexecuted_blocks=1 00:13:03.519 00:13:03.519 ' 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.519 10:41:30 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:03.519 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.520 10:41:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:03.520 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:13:03.520 10:41:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.646 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:13:11.647 10:41:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:11.647 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:11.647 10:41:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:11.647 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:11.647 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:11.647 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.647 10:41:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:13:11.647 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:13:11.648 10:41:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:13:11.648 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:13:11.648 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:13:11.648 altname enp217s0f0np0
00:13:11.648 altname ens818f0np0
00:13:11.648 inet 192.168.100.8/24 scope global mlx_0_0
00:13:11.648 valid_lft forever preferred_lft forever
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:13:11.648 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:13:11.648 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:13:11.648 altname enp217s0f1np1
00:13:11.648 altname ens818f1np1
00:13:11.648 inet 192.168.100.9/24 scope global mlx_0_1
00:13:11.648 valid_lft forever preferred_lft forever
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:13:11.648 192.168.100.9'
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:13:11.648 192.168.100.9'
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:13:11.648 192.168.100.9'
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3752335
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3752335
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3752335 ']'
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
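With both ports resolved, the trace above folds them into the newline-separated RDMA_IP_LIST, peels off the first and second target IPs, and launches nvmf_tgt with nvmf_auth debug logging. A sketch of the selection step as traced:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # first address
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # second address
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
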
00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3752364 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=20d42002e6c6b4d8c3dfb9784ccd77d691b81c6d48f8d9fe 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.s5A 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 20d42002e6c6b4d8c3dfb9784ccd77d691b81c6d48f8d9fe 0 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 20d42002e6c6b4d8c3dfb9784ccd77d691b81c6d48f8d9fe 0 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:11.648 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=20d42002e6c6b4d8c3dfb9784ccd77d691b81c6d48f8d9fe 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@733 -- # python - 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.s5A 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.s5A 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.s5A 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b2a3dc22ae2706ef1cf142b3a8772b004472b241887e0c90ef7d33105566bd6f 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dCV 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b2a3dc22ae2706ef1cf142b3a8772b004472b241887e0c90ef7d33105566bd6f 3 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b2a3dc22ae2706ef1cf142b3a8772b004472b241887e0c90ef7d33105566bd6f 3 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b2a3dc22ae2706ef1cf142b3a8772b004472b241887e0c90ef7d33105566bd6f 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dCV 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dCV 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.dCV 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:11.649 10:41:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=659aa79c24f1086b3ec378f182022b6b 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.8dc 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 659aa79c24f1086b3ec378f182022b6b 1 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 659aa79c24f1086b3ec378f182022b6b 1 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=659aa79c24f1086b3ec378f182022b6b 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.8dc 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.8dc 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.8dc 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5e418c37863907729e18d3a0186b5fed114f4ea28d8f5714 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.IdO 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5e418c37863907729e18d3a0186b5fed114f4ea28d8f5714 2 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5e418c37863907729e18d3a0186b5fed114f4ea28d8f5714 2 00:13:11.649 10:41:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5e418c37863907729e18d3a0186b5fed114f4ea28d8f5714 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.IdO 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.IdO 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.IdO 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ccaff5778c78c99121ac0a3fa08e5179276449843713a4e1 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Yy8 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ccaff5778c78c99121ac0a3fa08e5179276449843713a4e1 2 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ccaff5778c78c99121ac0a3fa08e5179276449843713a4e1 2 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ccaff5778c78c99121ac0a3fa08e5179276449843713a4e1 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Yy8 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Yy8 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Yy8 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
gen_dhchap_key sha256 32 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7dc8eee9a0bbcc8a08571c1241e6e1f0 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JvP 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7dc8eee9a0bbcc8a08571c1241e6e1f0 1 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7dc8eee9a0bbcc8a08571c1241e6e1f0 1 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7dc8eee9a0bbcc8a08571c1241e6e1f0 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JvP 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JvP 00:13:11.649 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.JvP 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=786cc8fb5412d59716c9ef6235f704c41ed8cb03900cb7131f445d4d2df1af1e 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:11.650 10:41:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.JXF 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 786cc8fb5412d59716c9ef6235f704c41ed8cb03900cb7131f445d4d2df1af1e 3 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 786cc8fb5412d59716c9ef6235f704c41ed8cb03900cb7131f445d4d2df1af1e 3 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=786cc8fb5412d59716c9ef6235f704c41ed8cb03900cb7131f445d4d2df1af1e 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.JXF 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.JXF 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.JXF 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3752335 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3752335 ']' 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:11.650 10:41:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.650 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:11.650 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:13:11.650 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3752364 /var/tmp/host.sock 00:13:11.650 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3752364 ']' 00:13:11.650 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:13:11.650 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:11.650 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
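At this point gen_dhchap_key has produced the four target keys (keys[0..3]) and three controller keys (ckeys[0..2]); ckeys[3] is left empty, so the key3 passes run without a controller key. The heredoc fed to `python -` is not captured by the xtrace; the sketch below assumes it applies the standard DH-HMAC-CHAP secret encoding (base64 over the ASCII key with a little-endian CRC-32 of the key appended), which is consistent with the DHHC-1:<digest>:<base64>: secrets that appear later in this log:

# Sketch of gen_dhchap_key null 48, following the steps traced above.
digest=0   # 0=null, 1=sha256, 2=sha384, 3=sha512 (the digests map)
len=48
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
# Assumption: a CRC-32 of the key is appended little-endian before base64.
crc = zlib.crc32(key).to_bytes(4, 'little')
print('DHHC-1:{:02}:{}:'.format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
chmod 0600 "$file"
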
00:13:11.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:11.650 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:11.650 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.650 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:11.650 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:13:11.650 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:13:11.650 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.650 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.909 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.909 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:11.909 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.s5A 00:13:11.909 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.909 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.909 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.909 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.s5A 00:13:11.909 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.s5A 00:13:12.168 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.dCV ]] 00:13:12.168 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dCV 00:13:12.168 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.168 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.168 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.168 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dCV 00:13:12.168 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dCV 00:13:12.168 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:12.168 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.8dc 00:13:12.168 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.168 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.168 10:41:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.168 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.8dc 00:13:12.168 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.8dc 00:13:12.427 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.IdO ]] 00:13:12.428 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IdO 00:13:12.428 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.428 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.428 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.428 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IdO 00:13:12.428 10:41:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IdO 00:13:12.697 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:12.697 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Yy8 00:13:12.697 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.697 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.697 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.697 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Yy8 00:13:12.697 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Yy8 00:13:12.956 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.JvP ]] 00:13:12.956 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JvP 00:13:12.956 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.956 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.956 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.956 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JvP 00:13:12.956 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JvP 00:13:12.956 10:41:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:12.956 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.JXF 00:13:12.956 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.956 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.956 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.956 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.JXF 00:13:12.956 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.JXF 00:13:13.215 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:13:13.215 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:13.215 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:13.215 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:13.215 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:13.215 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:13.502 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:13:13.502 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:13.502 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:13.502 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:13.502 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:13.502 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.503 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.503 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.503 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.503 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.503 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.503 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:13.503 10:41:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:13.787
00:13:13.787 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:13.787 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:13.787 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:13.787 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:13.787 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:13.787 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.787 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:14.059 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:14.059 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:14.059 {
00:13:14.059 "cntlid": 1,
00:13:14.059 "qid": 0,
00:13:14.059 "state": "enabled",
00:13:14.059 "thread": "nvmf_tgt_poll_group_000",
00:13:14.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:13:14.059 "listen_address": {
00:13:14.059 "trtype": "RDMA",
00:13:14.059 "adrfam": "IPv4",
00:13:14.059 "traddr": "192.168.100.8",
00:13:14.059 "trsvcid": "4420"
00:13:14.059 },
00:13:14.059 "peer_address": {
00:13:14.059 "trtype": "RDMA",
00:13:14.059 "adrfam": "IPv4",
00:13:14.059 "traddr": "192.168.100.8",
00:13:14.059 "trsvcid": "59258"
00:13:14.059 },
00:13:14.059 "auth": {
00:13:14.059 "state": "completed",
00:13:14.059 "digest": "sha256",
00:13:14.059 "dhgroup": "null"
00:13:14.059 }
00:13:14.059 }
00:13:14.059 ]'
00:13:14.059 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:14.059 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:14.059 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:14.059 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:13:14.059 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:14.059 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:14.059 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:14.059 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock
bdev_nvme_detach_controller nvme0 00:13:14.318 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:13:14.318 10:41:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:13:14.887 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.887 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:14.887 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.887 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.887 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.887 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.887 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:14.887 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:15.147 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:13:15.147 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.147 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:15.147 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:15.147 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:15.147 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.147 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:15.147 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.147 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.147 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.147 10:41:42 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:15.147 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:15.147 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:15.406
00:13:15.406 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:15.406 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:15.406 10:41:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:15.665 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:15.665 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:15.665 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.665 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:15.665 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.665 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:15.665 {
00:13:15.665 "cntlid": 3,
00:13:15.665 "qid": 0,
00:13:15.665 "state": "enabled",
00:13:15.665 "thread": "nvmf_tgt_poll_group_000",
00:13:15.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:13:15.665 "listen_address": {
00:13:15.665 "trtype": "RDMA",
00:13:15.665 "adrfam": "IPv4",
00:13:15.665 "traddr": "192.168.100.8",
00:13:15.665 "trsvcid": "4420"
00:13:15.665 },
00:13:15.665 "peer_address": {
00:13:15.665 "trtype": "RDMA",
00:13:15.665 "adrfam": "IPv4",
00:13:15.665 "traddr": "192.168.100.8",
00:13:15.665 "trsvcid": "53034"
00:13:15.665 },
00:13:15.665 "auth": {
00:13:15.665 "state": "completed",
00:13:15.665 "digest": "sha256",
00:13:15.665 "dhgroup": "null"
00:13:15.665 }
00:13:15.665 }
00:13:15.665 ]'
00:13:15.665 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:15.665 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:15.665 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:15.665 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:13:15.665 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:15.665 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:15.665 10:41:43
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.665 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.924 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:13:15.925 10:41:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:13:16.493 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.752 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:16.752 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.752 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.752 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.752 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.752 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:16.752 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:17.011 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:13:17.011 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.011 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:17.011 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:17.011 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:17.011 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.011 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.011 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.011 10:41:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.011 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.011 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.011 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.011 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.270 00:13:17.270 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.270 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.270 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.270 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.270 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.270 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.270 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.270 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.270 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.270 { 00:13:17.270 "cntlid": 5, 00:13:17.270 "qid": 0, 00:13:17.270 "state": "enabled", 00:13:17.271 "thread": "nvmf_tgt_poll_group_000", 00:13:17.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:13:17.271 "listen_address": { 00:13:17.271 "trtype": "RDMA", 00:13:17.271 "adrfam": "IPv4", 00:13:17.271 "traddr": "192.168.100.8", 00:13:17.271 "trsvcid": "4420" 00:13:17.271 }, 00:13:17.271 "peer_address": { 00:13:17.271 "trtype": "RDMA", 00:13:17.271 "adrfam": "IPv4", 00:13:17.271 "traddr": "192.168.100.8", 00:13:17.271 "trsvcid": "39799" 00:13:17.271 }, 00:13:17.271 "auth": { 00:13:17.271 "state": "completed", 00:13:17.271 "digest": "sha256", 00:13:17.271 "dhgroup": "null" 00:13:17.271 } 00:13:17.271 } 00:13:17.271 ]' 00:13:17.271 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.529 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:17.529 10:41:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.529 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:17.529 10:41:45 
00:13:17.529 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:17.529 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:17.788 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW:
00:13:17.788 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW:
00:13:18.355 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:18.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:18.355 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:13:18.355 10:41:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:18.355 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:18.355 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:18.355 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:18.355 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:13:18.355 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:13:18.614 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:13:18.614 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:18.614 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:18.614 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:13:18.614 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:18.614 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:18.614 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:13:18.614 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:18.614 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:18.614 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:18.614 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:18.614 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:18.614 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:18.874
00:13:18.874 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:18.874 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:18.874 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:19.133 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:19.133 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:19.133 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:19.133 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:19.133 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:19.133 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:19.133 {
00:13:19.133 "cntlid": 7,
00:13:19.133 "qid": 0,
00:13:19.133 "state": "enabled",
00:13:19.133 "thread": "nvmf_tgt_poll_group_000",
00:13:19.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:13:19.133 "listen_address": {
00:13:19.133 "trtype": "RDMA",
00:13:19.133 "adrfam": "IPv4",
00:13:19.133 "traddr": "192.168.100.8",
00:13:19.133 "trsvcid": "4420"
00:13:19.133 },
00:13:19.133 "peer_address": {
00:13:19.133 "trtype": "RDMA",
00:13:19.133 "adrfam": "IPv4",
00:13:19.133 "traddr": "192.168.100.8",
00:13:19.133 "trsvcid": "41214"
00:13:19.133 },
00:13:19.133 "auth": {
00:13:19.133 "state": "completed",
00:13:19.133 "digest": "sha256",
00:13:19.133 "dhgroup": "null"
00:13:19.133 }
00:13:19.133 }
00:13:19.133 ]'
00:13:19.133 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:19.133 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:19.133 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:19.133 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:13:19.133 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:19.392 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:19.392 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:19.393 10:41:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:19.393 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=:
00:13:19.393 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=:
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:20.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
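That closes out the sha256/null sweep: every key index went through configure, attach, verify, detach, kernel connect, and cleanup. The source references in the trace (target/auth.sh@119-123) are consistent with a driver loop of roughly this shape; a reconstruction of the structure, not the verbatim script:

    for dhgroup in "${dhgroups[@]}"; do      # null, ffdhe2048, ffdhe3072, ...
        for keyid in "${!keys[@]}"; do       # key0 .. key3
            # restrict the host to one digest/dhgroup combination, then authenticate with it
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done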
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:20.331 10:41:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:20.590
00:13:20.590 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:20.590 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:20.590 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:20.850 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:20.850 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:20.850 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:20.850 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:20.850 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:20.850 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:20.850 {
00:13:20.850 "cntlid": 9,
00:13:20.850 "qid": 0,
00:13:20.850 "state": "enabled",
00:13:20.850 "thread": "nvmf_tgt_poll_group_000",
00:13:20.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:13:20.850 "listen_address": {
00:13:20.850 "trtype": "RDMA",
00:13:20.850 "adrfam": "IPv4",
00:13:20.850 "traddr": "192.168.100.8",
00:13:20.850 "trsvcid": "4420"
00:13:20.850 },
00:13:20.850 "peer_address": {
00:13:20.850 "trtype": "RDMA",
00:13:20.850 "adrfam": "IPv4",
00:13:20.850 "traddr": "192.168.100.8",
00:13:20.850 "trsvcid": "56662"
00:13:20.850 },
00:13:20.850 "auth": {
00:13:20.850 "state": "completed",
00:13:20.850 "digest": "sha256",
00:13:20.850 "dhgroup": "ffdhe2048"
00:13:20.850 }
00:13:20.850 }
00:13:20.850 ]'
00:13:20.850 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:20.850 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:20.850 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:21.109 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:13:21.109 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:21.109 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:21.109 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:21.109 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:21.109 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=:
00:13:21.109 10:41:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=:
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:22.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
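Each pass also exercises the kernel host path: after detaching the SPDK bdev controller, the script reconnects with nvme-cli, handing the DH-HMAC-CHAP secrets over on the command line. The flags below mirror the sh@36 invocations in this trace; $hostnqn, $hostid, $key, and $ctrl_key stand in for the literal values printed above, and the ${ctrl_key:+...} expansion drops the flag for keys without a controller secret:

    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" ${ctrl_key:+--dhchap-ctrl-secret "$ctrl_key"}
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect "disconnected 1 controller(s)"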
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:22.047 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:22.306
00:13:22.306 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:22.306 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:22.306 10:41:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:22.565 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:22.565 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:22.565 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:22.565 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:22.565 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:22.565 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:22.565 {
00:13:22.565 "cntlid": 11,
00:13:22.565 "qid": 0,
00:13:22.565 "state": "enabled",
00:13:22.565 "thread": "nvmf_tgt_poll_group_000",
00:13:22.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:13:22.565 "listen_address": {
00:13:22.565 "trtype": "RDMA",
00:13:22.565 "adrfam": "IPv4",
00:13:22.565 "traddr": "192.168.100.8",
00:13:22.565 "trsvcid": "4420"
00:13:22.565 },
00:13:22.565 "peer_address": {
00:13:22.565 "trtype": "RDMA",
00:13:22.565 "adrfam": "IPv4",
00:13:22.565 "traddr": "192.168.100.8",
00:13:22.565 "trsvcid": "57545"
00:13:22.565 },
00:13:22.565 "auth": {
00:13:22.565 "state": "completed",
00:13:22.565 "digest": "sha256",
00:13:22.565 "dhgroup": "ffdhe2048"
00:13:22.565 }
00:13:22.565 }
00:13:22.565 ]'
00:13:22.565 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:22.565 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:22.565 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:22.824 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:13:22.824 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:22.824 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:22.824 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:22.824 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:23.084 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==:
00:13:23.084 10:41:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==:
00:13:23.653 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:23.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:23.653 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:13:23.653 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:23.653 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:23.653 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:23.653 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:23.653 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:13:23.653 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:13:23.912 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:13:23.912 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:23.912 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:23.912 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:13:23.912 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:13:23.912 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:23.912 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:23.912 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:23.912 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:23.912 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:23.912 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:23.912 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:23.912 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:24.172
00:13:24.172 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:24.172 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:24.172 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:24.432 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:24.432 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:24.432 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:24.432 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:24.432 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:24.432 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:24.432 {
00:13:24.432 "cntlid": 13,
00:13:24.432 "qid": 0,
00:13:24.432 "state": "enabled",
00:13:24.432 "thread": "nvmf_tgt_poll_group_000",
00:13:24.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:13:24.432 "listen_address": {
00:13:24.432 "trtype": "RDMA",
00:13:24.432 "adrfam": "IPv4",
00:13:24.432 "traddr": "192.168.100.8",
00:13:24.432 "trsvcid": "4420"
00:13:24.432 },
00:13:24.432 "peer_address": {
00:13:24.432 "trtype": "RDMA",
00:13:24.432 "adrfam": "IPv4",
00:13:24.432 "traddr": "192.168.100.8",
00:13:24.432 "trsvcid": "46978"
00:13:24.432 },
00:13:24.432 "auth": {
00:13:24.432 "state": "completed",
00:13:24.432 "digest": "sha256",
00:13:24.432 "dhgroup": "ffdhe2048"
00:13:24.432 }
00:13:24.432 }
00:13:24.432 ]'
00:13:24.432 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:24.432 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:24.432 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:24.432 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:13:24.432 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:24.432 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:24.432 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:24.432 10:41:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:24.691 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW:
00:13:24.691 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW:
00:13:25.260 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:25.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:25.260 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:13:25.260 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.260 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:25.260 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.260 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:25.260 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:13:25.260 10:41:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:13:25.519 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:13:25.519 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:25.519 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:25.519 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:13:25.519 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:25.519 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:25.519 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:13:25.519 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:25.519 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:25.519 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:25.519 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:25.519 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:25.519 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:25.779
00:13:25.779 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:25.779 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:25.779 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:26.038 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:26.038 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:26.038 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:26.038 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:26.038 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:26.038 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:26.038 {
00:13:26.038 "cntlid": 15,
00:13:26.038 "qid": 0,
00:13:26.038 "state": "enabled",
00:13:26.038 "thread": "nvmf_tgt_poll_group_000",
00:13:26.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:13:26.038 "listen_address": {
00:13:26.038 "trtype": "RDMA",
00:13:26.038 "adrfam": "IPv4",
00:13:26.038 "traddr": "192.168.100.8",
00:13:26.038 "trsvcid": "4420"
00:13:26.038 },
00:13:26.038 "peer_address": {
00:13:26.038 "trtype": "RDMA",
00:13:26.038 "adrfam": "IPv4",
00:13:26.038 "traddr": "192.168.100.8",
00:13:26.038 "trsvcid": "48855"
00:13:26.038 },
00:13:26.038 "auth": {
00:13:26.038 "state": "completed",
00:13:26.038 "digest": "sha256",
00:13:26.038 "dhgroup": "ffdhe2048"
00:13:26.038 }
00:13:26.038 }
00:13:26.038 ]'
00:13:26.038 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:26.038 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:26.038 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:26.038 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:13:26.038 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:26.038 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:26.038 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:26.038 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:26.297 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=:
00:13:26.297 10:41:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=:
00:13:26.865 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:27.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:27.125 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:13:27.125 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.125 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:27.125 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.125 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:27.125 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:27.125 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:13:27.125 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:13:27.384 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:13:27.384 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:27.384 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:27.384 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:13:27.384 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:27.384 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:27.384 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:27.384 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.384 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:27.384 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.384 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:27.385 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:27.385 10:41:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:27.644
00:13:27.644 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:27.644 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:27.644 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:27.644 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:27.644 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:27.644 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:27.644 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:27.644 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:27.644 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:27.644 {
00:13:27.644 "cntlid": 17,
00:13:27.644 "qid": 0,
00:13:27.644 "state": "enabled",
00:13:27.644 "thread": "nvmf_tgt_poll_group_000",
00:13:27.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:13:27.644 "listen_address": {
00:13:27.644 "trtype": "RDMA",
00:13:27.644 "adrfam": "IPv4",
00:13:27.644 "traddr": "192.168.100.8",
00:13:27.644 "trsvcid": "4420"
00:13:27.644 },
00:13:27.644 "peer_address": {
00:13:27.644 "trtype": "RDMA",
00:13:27.644 "adrfam": "IPv4",
00:13:27.644 "traddr": "192.168.100.8",
00:13:27.644 "trsvcid": "35546"
00:13:27.644 },
00:13:27.644 "auth": {
00:13:27.644 "state": "completed",
00:13:27.644 "digest": "sha256",
00:13:27.644 "dhgroup": "ffdhe3072"
00:13:27.644 }
00:13:27.644 }
00:13:27.644 ]'
00:13:27.903 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:27.903 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:27.903 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:27.903 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:13:27.903 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:27.903 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:27.903 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:27.903 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:28.163 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=:
00:13:28.163 10:41:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=:
00:13:28.731 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:28.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:28.731 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:13:28.731 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.731 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:28.731 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.731 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:28.731 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:13:28.731 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:13:28.990 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:13:28.990 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:28.990 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:28.990 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:13:28.990 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:28.990 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:28.990 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:28.990 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.990 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:28.990 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.990 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:28.990 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:28.990 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:29.249
00:13:29.249 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:29.249 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:29.249 10:41:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:29.509 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
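The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) assignment that recurs at target/auth.sh@68 is the usual bash idiom for an optional flag: the array stays empty when no controller key is configured for that index (as with key3 above, where add_host is called without --dhchap-ctrlr-key), and an empty array expansion contributes no arguments at all. Schematically, with $keyid and $hostnqn standing in for the script's positional parameters:

    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})   # empty array if ckeys[keyid] is unset or empty
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"                  # "${ckey[@]}" expands to nothing when empty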
00:13:29.509 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:29.509 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:29.509 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:29.509 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:29.509 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:29.509 {
00:13:29.509 "cntlid": 19,
00:13:29.509 "qid": 0,
00:13:29.509 "state": "enabled",
00:13:29.509 "thread": "nvmf_tgt_poll_group_000",
00:13:29.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:13:29.509 "listen_address": {
00:13:29.509 "trtype": "RDMA",
00:13:29.509 "adrfam": "IPv4",
00:13:29.509 "traddr": "192.168.100.8",
00:13:29.509 "trsvcid": "4420"
00:13:29.509 },
00:13:29.509 "peer_address": {
00:13:29.509 "trtype": "RDMA",
00:13:29.509 "adrfam": "IPv4",
00:13:29.509 "traddr": "192.168.100.8",
00:13:29.509 "trsvcid": "43039"
00:13:29.509 },
00:13:29.509 "auth": {
00:13:29.509 "state": "completed",
00:13:29.509 "digest": "sha256",
00:13:29.509 "dhgroup": "ffdhe3072"
00:13:29.509 }
00:13:29.509 }
00:13:29.509 ]'
00:13:29.509 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:29.509 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:29.509 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:29.509 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:13:29.768 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:29.768 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:29.768 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:29.768 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:29.768 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==:
00:13:29.768 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==:
00:13:30.337 10:41:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:30.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:30.595 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:13:30.595 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:30.595 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:30.595 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:30.595 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:30.595 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:13:30.595 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:13:30.855 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:13:30.855 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:30.855 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:30.855 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:13:30.855 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:13:30.855 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:30.855 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:30.855 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:30.855 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:30.855 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:30.855 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:30.855 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:30.855 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:31.114
00:13:31.114 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:31.114 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:31.114 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:31.114 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:31.114 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:31.114 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:31.114 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:31.373 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:31.373 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:31.373 {
00:13:31.373 "cntlid": 21,
00:13:31.373 "qid": 0,
00:13:31.373 "state": "enabled",
00:13:31.373 "thread": "nvmf_tgt_poll_group_000",
00:13:31.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:13:31.373 "listen_address": {
00:13:31.373 "trtype": "RDMA",
00:13:31.373 "adrfam": "IPv4",
00:13:31.373 "traddr": "192.168.100.8",
00:13:31.373 "trsvcid": "4420"
00:13:31.373 },
00:13:31.373 "peer_address": {
00:13:31.373 "trtype": "RDMA",
00:13:31.373 "adrfam": "IPv4",
00:13:31.373 "traddr": "192.168.100.8",
00:13:31.373 "trsvcid": "34589"
00:13:31.373 },
00:13:31.373 "auth": {
00:13:31.373 "state": "completed",
00:13:31.373 "digest": "sha256",
00:13:31.373 "dhgroup": "ffdhe3072"
00:13:31.373 }
00:13:31.373 }
00:13:31.373 ]'
00:13:31.373 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:31.373 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:31.373 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:31.373 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:13:31.373 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:31.373 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:31.373 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:31.373 10:41:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:31.633 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW:
00:13:31.633 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW:
disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.200 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:32.200 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.200 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.200 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.200 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.200 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:32.200 10:41:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:32.459 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:13:32.459 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.459 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:32.459 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:32.460 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:32.460 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.460 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:13:32.460 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.460 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.460 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.460 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:32.460 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:32.460 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:32.719 00:13:32.719 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:32.719 10:42:00 
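[Editor's sketch.] The key3 pass traced above differs from the key2 one in a detail worth noting: the @68 step, ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), expands to nothing, evidently because ckeys[3] is empty, so nvmf_subsystem_add_host and the subsequent attach are issued with --dhchap-key key3 only, i.e. without a bidirectional (controller) key. A minimal standalone illustration of that ${parameter:+word} expansion, with hypothetical array values not taken from the test suite:

    #!/usr/bin/env bash
    # ${ckeys[i]:+...} substitutes the option pair only when ckeys[i] is
    # set AND non-empty; an empty element yields an empty ckey array.
    ckeys=(ckeyval0 ckeyval1 ckeyval2 "")   # hypothetical; index 3 empty
    for i in 0 1 2 3; do
      ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
      echo "key$i: ${ckey[@]:-<unidirectional, no ctrlr key>}"
    done
    # key0: --dhchap-ctrlr-key ckey0
    # key1: --dhchap-ctrlr-key ckey1
    # key2: --dhchap-ctrlr-key ckey2
    # key3: <unidirectional, no ctrlr key>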
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:32.719 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.978 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.978 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.978 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.978 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.978 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.978 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.978 { 00:13:32.978 "cntlid": 23, 00:13:32.978 "qid": 0, 00:13:32.978 "state": "enabled", 00:13:32.978 "thread": "nvmf_tgt_poll_group_000", 00:13:32.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:13:32.978 "listen_address": { 00:13:32.978 "trtype": "RDMA", 00:13:32.978 "adrfam": "IPv4", 00:13:32.978 "traddr": "192.168.100.8", 00:13:32.978 "trsvcid": "4420" 00:13:32.978 }, 00:13:32.978 "peer_address": { 00:13:32.978 "trtype": "RDMA", 00:13:32.978 "adrfam": "IPv4", 00:13:32.978 "traddr": "192.168.100.8", 00:13:32.978 "trsvcid": "48069" 00:13:32.978 }, 00:13:32.978 "auth": { 00:13:32.978 "state": "completed", 00:13:32.978 "digest": "sha256", 00:13:32.978 "dhgroup": "ffdhe3072" 00:13:32.978 } 00:13:32.978 } 00:13:32.978 ]' 00:13:32.978 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.978 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:32.978 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.978 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:32.978 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.238 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.238 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.238 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.238 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:13:33.238 10:42:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:13:34.176 10:42:01 
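[Editor's sketch.] The pre-shared secrets passed to nvme connect above use the DHHC-1 interchange format from the NVMe base specification: DHHC-1:<t>:<base64 of the key material plus a short CRC tail>:, where <t> is 00 for an untransformed secret and 01/02/03 for SHA-256/384/512-sized keys (32/48/64 bytes) — which is why the DHHC-1:03: secret above is visibly longer than the DHHC-1:01: one. As a hedged sketch only, recent nvme-cli releases can mint such a string; flag spellings vary between versions:

    # Hypothetical invocation; consult `nvme gen-dhchap-key --help` on the
    # installed nvme-cli before relying on these exact flags.
    nvme gen-dhchap-key --hmac=3 --key-length=64 \
      --nqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    # expected output shape: DHHC-1:03:<base64>: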
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.176 10:42:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.435 00:13:34.435 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:34.435 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.435 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:34.694 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.694 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.694 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.694 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.694 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.694 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.694 { 00:13:34.694 "cntlid": 25, 00:13:34.694 "qid": 0, 00:13:34.694 "state": "enabled", 00:13:34.694 "thread": "nvmf_tgt_poll_group_000", 00:13:34.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:13:34.694 "listen_address": { 00:13:34.694 "trtype": "RDMA", 00:13:34.694 "adrfam": "IPv4", 00:13:34.694 "traddr": "192.168.100.8", 00:13:34.694 "trsvcid": "4420" 00:13:34.694 }, 00:13:34.694 "peer_address": { 00:13:34.694 "trtype": "RDMA", 00:13:34.694 "adrfam": "IPv4", 00:13:34.694 "traddr": "192.168.100.8", 00:13:34.694 "trsvcid": "57010" 00:13:34.694 }, 00:13:34.694 "auth": { 00:13:34.694 "state": "completed", 00:13:34.694 "digest": "sha256", 00:13:34.694 "dhgroup": "ffdhe4096" 00:13:34.694 } 00:13:34.694 } 00:13:34.694 ]' 00:13:34.694 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.694 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:34.694 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.953 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:34.953 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.953 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.953 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.953 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.953 10:42:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:13:34.953 10:42:02 
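[Editor's sketch.] Every connect_authenticate pass closes with the same three assertions seen in the @75-@77 steps above: the target is asked for the subsystem's active queue pairs, and the reported digest, DH group, and authentication state must match what was just negotiated. Condensed into a standalone check, with the rpc.py path and NQN as in this run, and assuming the target from this run is still serving its default RPC socket:

    # Condensed from the @74-@77 trace steps.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]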
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:13:35.891 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.891 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:35.891 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.891 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.891 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.891 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:35.891 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:35.891 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:36.151 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:13:36.151 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.151 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:36.151 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:36.151 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:36.151 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.151 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.151 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.151 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.151 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.151 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.151 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.151 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.418 00:13:36.418 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:36.418 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:36.418 10:42:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.418 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.418 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.418 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.418 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.418 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.418 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:36.418 { 00:13:36.418 "cntlid": 27, 00:13:36.418 "qid": 0, 00:13:36.418 "state": "enabled", 00:13:36.418 "thread": "nvmf_tgt_poll_group_000", 00:13:36.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:13:36.418 "listen_address": { 00:13:36.418 "trtype": "RDMA", 00:13:36.418 "adrfam": "IPv4", 00:13:36.418 "traddr": "192.168.100.8", 00:13:36.418 "trsvcid": "4420" 00:13:36.418 }, 00:13:36.418 "peer_address": { 00:13:36.418 "trtype": "RDMA", 00:13:36.418 "adrfam": "IPv4", 00:13:36.418 "traddr": "192.168.100.8", 00:13:36.418 "trsvcid": "43992" 00:13:36.418 }, 00:13:36.418 "auth": { 00:13:36.418 "state": "completed", 00:13:36.418 "digest": "sha256", 00:13:36.418 "dhgroup": "ffdhe4096" 00:13:36.418 } 00:13:36.418 } 00:13:36.418 ]' 00:13:36.418 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:36.678 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:36.678 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:36.678 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:36.678 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:36.678 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.678 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.678 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.937 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:13:36.937 10:42:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:13:37.506 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.506 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:37.506 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.506 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.506 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.506 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:37.506 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:37.506 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:37.765 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:13:37.765 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.765 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:37.765 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:37.765 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:37.765 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.765 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.765 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.765 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.765 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.765 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.765 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.765 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:38.024 00:13:38.024 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:38.024 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:38.025 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.284 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.284 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.284 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.284 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.284 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.284 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.284 { 00:13:38.284 "cntlid": 29, 00:13:38.284 "qid": 0, 00:13:38.284 "state": "enabled", 00:13:38.284 "thread": "nvmf_tgt_poll_group_000", 00:13:38.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:13:38.284 "listen_address": { 00:13:38.284 "trtype": "RDMA", 00:13:38.284 "adrfam": "IPv4", 00:13:38.284 "traddr": "192.168.100.8", 00:13:38.284 "trsvcid": "4420" 00:13:38.284 }, 00:13:38.284 "peer_address": { 00:13:38.284 "trtype": "RDMA", 00:13:38.284 "adrfam": "IPv4", 00:13:38.284 "traddr": "192.168.100.8", 00:13:38.284 "trsvcid": "54791" 00:13:38.284 }, 00:13:38.284 "auth": { 00:13:38.284 "state": "completed", 00:13:38.284 "digest": "sha256", 00:13:38.284 "dhgroup": "ffdhe4096" 00:13:38.284 } 00:13:38.284 } 00:13:38.284 ]' 00:13:38.284 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:38.284 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:38.284 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:38.284 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:38.284 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.543 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.543 10:42:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.543 10:42:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.543 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:13:38.543 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:13:39.112 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.371 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:39.371 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.371 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.371 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.371 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:39.371 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:39.371 10:42:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:39.631 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:13:39.631 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.631 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:39.631 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:39.631 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:39.631 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.631 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:13:39.631 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.631 10:42:07 
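[Editor's sketch.] Viewed across iterations, this whole section is one nested sweep: the @119 loop advances the DH group (ffdhe3072, then ffdhe4096 — here heading into its last key), and the @120 loop re-runs the full configure/connect/verify/teardown cycle for each key index 0-3. An inferred outline of that driver, reconstructed from the loop markers; the real target/auth.sh may differ in detail:

    # Reconstruction, not verbatim source: shape inferred from the
    # target/auth.sh@119-@123 markers in this trace.
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe3072 ffdhe4096 ffdhe6144 ...
      for keyid in "${!keys[@]}"; do         # 0 1 2 3
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
          --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha256 "$dhgroup" "$keyid"   # add_host, attach, verify, detach
      done
    done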
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.631 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.631 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:39.631 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:39.631 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:39.890 00:13:39.890 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.890 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.890 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.149 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.149 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.149 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.149 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.149 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.149 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:40.149 { 00:13:40.149 "cntlid": 31, 00:13:40.149 "qid": 0, 00:13:40.149 "state": "enabled", 00:13:40.149 "thread": "nvmf_tgt_poll_group_000", 00:13:40.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:13:40.149 "listen_address": { 00:13:40.149 "trtype": "RDMA", 00:13:40.149 "adrfam": "IPv4", 00:13:40.149 "traddr": "192.168.100.8", 00:13:40.149 "trsvcid": "4420" 00:13:40.149 }, 00:13:40.149 "peer_address": { 00:13:40.149 "trtype": "RDMA", 00:13:40.149 "adrfam": "IPv4", 00:13:40.149 "traddr": "192.168.100.8", 00:13:40.149 "trsvcid": "46170" 00:13:40.149 }, 00:13:40.149 "auth": { 00:13:40.149 "state": "completed", 00:13:40.149 "digest": "sha256", 00:13:40.149 "dhgroup": "ffdhe4096" 00:13:40.149 } 00:13:40.149 } 00:13:40.149 ]' 00:13:40.149 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:40.149 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:40.149 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:40.149 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:40.150 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
jq -r '.[0].auth.state' 00:13:40.150 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.150 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.150 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.408 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:13:40.408 10:42:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:13:40.976 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.976 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:40.976 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.976 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.976 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.976 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:40.976 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.976 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:40.976 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:41.236 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:13:41.236 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.236 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:41.236 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:41.236 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:41.236 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.236 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:13:41.236 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.236 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.236 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.236 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.236 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.236 10:42:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.804 00:13:41.804 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.804 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.804 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.804 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.804 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.804 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.804 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.804 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.804 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.804 { 00:13:41.804 "cntlid": 33, 00:13:41.804 "qid": 0, 00:13:41.804 "state": "enabled", 00:13:41.804 "thread": "nvmf_tgt_poll_group_000", 00:13:41.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:13:41.804 "listen_address": { 00:13:41.804 "trtype": "RDMA", 00:13:41.804 "adrfam": "IPv4", 00:13:41.804 "traddr": "192.168.100.8", 00:13:41.804 "trsvcid": "4420" 00:13:41.804 }, 00:13:41.804 "peer_address": { 00:13:41.804 "trtype": "RDMA", 00:13:41.804 "adrfam": "IPv4", 00:13:41.804 "traddr": "192.168.100.8", 00:13:41.804 "trsvcid": "58749" 00:13:41.804 }, 00:13:41.804 "auth": { 00:13:41.804 "state": "completed", 00:13:41.804 "digest": "sha256", 00:13:41.804 "dhgroup": "ffdhe6144" 00:13:41.804 } 00:13:41.804 } 00:13:41.804 ]' 00:13:41.804 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.804 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:41.804 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:13:42.118 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:42.118 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.118 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.118 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.118 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.118 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:13:42.118 10:42:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:13:42.735 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.993 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:42.993 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.993 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.993 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.993 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:42.993 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:42.993 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:42.993 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:13:42.994 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:42.994 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:42.994 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:42.994 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:42.994 10:42:10 
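[Editor's sketch.] Two RPC endpoints are being driven side by side throughout: rpc_cmd configures the nvmf target over SPDK's default socket, while every hostrpc call steers a second SPDK instance, acting as the NVMe host, through -s /var/tmp/host.sock, and plain nvme connect/disconnect exercises the kernel initiator on top. Judging by the @31 expansions, the hostrpc helper amounts to the following, with $rootdir standing in for the spdk checkout seen in the paths above:

    # Plausible definition matching every target/auth.sh@31 expansion in
    # this log; the real helper may pass extra options.
    hostrpc() {
      "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }
    # usage, as seen above:
    #   hostrpc bdev_nvme_get_controllers
    #   hostrpc bdev_nvme_detach_controller nvme0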
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.994 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.994 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.994 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.994 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.994 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.994 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.994 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.561 00:13:43.561 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.561 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.561 10:42:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.561 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.562 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.562 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.562 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.562 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.562 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:43.562 { 00:13:43.562 "cntlid": 35, 00:13:43.562 "qid": 0, 00:13:43.562 "state": "enabled", 00:13:43.562 "thread": "nvmf_tgt_poll_group_000", 00:13:43.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:13:43.562 "listen_address": { 00:13:43.562 "trtype": "RDMA", 00:13:43.562 "adrfam": "IPv4", 00:13:43.562 "traddr": "192.168.100.8", 00:13:43.562 "trsvcid": "4420" 00:13:43.562 }, 00:13:43.562 "peer_address": { 00:13:43.562 "trtype": "RDMA", 00:13:43.562 "adrfam": "IPv4", 00:13:43.562 "traddr": "192.168.100.8", 00:13:43.562 "trsvcid": "35631" 00:13:43.562 }, 00:13:43.562 "auth": { 00:13:43.562 "state": "completed", 00:13:43.562 "digest": "sha256", 00:13:43.562 "dhgroup": "ffdhe6144" 00:13:43.562 } 00:13:43.562 } 
00:13:43.562 ]' 00:13:43.562 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:43.820 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:43.821 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:43.821 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:43.821 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:43.821 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.821 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.821 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.079 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:13:44.080 10:42:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:13:44.647 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.647 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:44.647 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.647 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.647 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.647 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:44.647 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:44.647 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:44.906 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:13:44.906 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.906 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha256 00:13:44.906 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:44.906 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:44.906 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.906 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.906 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.906 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.906 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.906 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.906 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.907 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.165 00:13:45.165 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:45.165 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:45.165 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.424 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.424 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.424 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.424 10:42:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.425 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.425 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:45.425 { 00:13:45.425 "cntlid": 37, 00:13:45.425 "qid": 0, 00:13:45.425 "state": "enabled", 00:13:45.425 "thread": "nvmf_tgt_poll_group_000", 00:13:45.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:13:45.425 "listen_address": { 00:13:45.425 "trtype": "RDMA", 00:13:45.425 "adrfam": "IPv4", 00:13:45.425 "traddr": "192.168.100.8", 00:13:45.425 "trsvcid": "4420" 00:13:45.425 }, 00:13:45.425 "peer_address": { 00:13:45.425 "trtype": "RDMA", 00:13:45.425 "adrfam": 
"IPv4", 00:13:45.425 "traddr": "192.168.100.8", 00:13:45.425 "trsvcid": "58011" 00:13:45.425 }, 00:13:45.425 "auth": { 00:13:45.425 "state": "completed", 00:13:45.425 "digest": "sha256", 00:13:45.425 "dhgroup": "ffdhe6144" 00:13:45.425 } 00:13:45.425 } 00:13:45.425 ]' 00:13:45.425 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:45.425 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:45.425 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.425 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:45.684 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.684 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.684 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.684 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.684 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:13:45.684 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:13:46.620 10:42:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.620 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:46.620 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.620 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.620 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.620 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.620 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:46.620 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:46.620 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha256 ffdhe6144 3 00:13:46.620 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.620 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:46.620 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:46.620 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:46.620 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.620 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:13:46.621 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.621 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.879 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.879 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:46.879 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:46.879 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:47.138 00:13:47.138 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:47.138 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.138 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.397 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.397 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.397 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.397 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.397 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.397 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.397 { 00:13:47.397 "cntlid": 39, 00:13:47.397 "qid": 0, 00:13:47.397 "state": "enabled", 00:13:47.397 "thread": "nvmf_tgt_poll_group_000", 00:13:47.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:13:47.397 "listen_address": { 00:13:47.397 "trtype": "RDMA", 00:13:47.397 "adrfam": "IPv4", 00:13:47.397 
"traddr": "192.168.100.8", 00:13:47.397 "trsvcid": "4420" 00:13:47.397 }, 00:13:47.397 "peer_address": { 00:13:47.397 "trtype": "RDMA", 00:13:47.397 "adrfam": "IPv4", 00:13:47.397 "traddr": "192.168.100.8", 00:13:47.397 "trsvcid": "60976" 00:13:47.397 }, 00:13:47.397 "auth": { 00:13:47.397 "state": "completed", 00:13:47.397 "digest": "sha256", 00:13:47.397 "dhgroup": "ffdhe6144" 00:13:47.397 } 00:13:47.397 } 00:13:47.397 ]' 00:13:47.397 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:47.397 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:47.397 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.397 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:47.397 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.397 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.397 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.397 10:42:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.656 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:13:47.656 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:13:48.223 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.482 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:48.482 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.482 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.482 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.482 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:48.482 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:48.482 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:48.482 10:42:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:48.482 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:13:48.482 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.482 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:48.482 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:48.482 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:48.482 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.482 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.482 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.482 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.482 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.482 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.482 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.482 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.050 00:13:49.050 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.050 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.050 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.309 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.309 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.309 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.309 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.309 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.309 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.309 { 00:13:49.309 "cntlid": 41, 00:13:49.309 "qid": 0, 00:13:49.309 "state": "enabled", 
00:13:49.309 "thread": "nvmf_tgt_poll_group_000", 00:13:49.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:13:49.309 "listen_address": { 00:13:49.309 "trtype": "RDMA", 00:13:49.309 "adrfam": "IPv4", 00:13:49.309 "traddr": "192.168.100.8", 00:13:49.309 "trsvcid": "4420" 00:13:49.309 }, 00:13:49.309 "peer_address": { 00:13:49.309 "trtype": "RDMA", 00:13:49.309 "adrfam": "IPv4", 00:13:49.309 "traddr": "192.168.100.8", 00:13:49.309 "trsvcid": "55868" 00:13:49.309 }, 00:13:49.309 "auth": { 00:13:49.309 "state": "completed", 00:13:49.309 "digest": "sha256", 00:13:49.309 "dhgroup": "ffdhe8192" 00:13:49.309 } 00:13:49.309 } 00:13:49.309 ]' 00:13:49.309 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.309 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:49.309 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.309 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:49.309 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.309 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.309 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.309 10:42:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.568 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:13:49.568 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:13:50.136 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.396 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:50.396 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.396 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.396 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.396 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:50.396 10:42:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:50.396 10:42:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:50.396 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:13:50.396 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.396 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:50.396 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:50.396 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:50.396 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.396 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.396 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.396 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.396 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.396 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.396 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.396 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.965 00:13:50.965 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:50.965 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:50.965 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.224 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.224 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.224 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.224 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:51.224 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.224 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.224 { 00:13:51.224 "cntlid": 43, 00:13:51.224 "qid": 0, 00:13:51.224 "state": "enabled", 00:13:51.224 "thread": "nvmf_tgt_poll_group_000", 00:13:51.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:13:51.224 "listen_address": { 00:13:51.224 "trtype": "RDMA", 00:13:51.224 "adrfam": "IPv4", 00:13:51.224 "traddr": "192.168.100.8", 00:13:51.224 "trsvcid": "4420" 00:13:51.224 }, 00:13:51.224 "peer_address": { 00:13:51.224 "trtype": "RDMA", 00:13:51.224 "adrfam": "IPv4", 00:13:51.224 "traddr": "192.168.100.8", 00:13:51.224 "trsvcid": "54880" 00:13:51.224 }, 00:13:51.224 "auth": { 00:13:51.224 "state": "completed", 00:13:51.224 "digest": "sha256", 00:13:51.224 "dhgroup": "ffdhe8192" 00:13:51.224 } 00:13:51.224 } 00:13:51.224 ]' 00:13:51.224 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.224 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:51.224 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:51.224 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:51.224 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.224 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.224 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.224 10:42:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.483 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:13:51.483 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:13:52.051 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.310 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:52.310 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.310 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
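The trace above repeats one fixed cycle per digest/dhgroup/key combination. As a condensed sketch of that cycle (the rpc.py path, sockets, NQNs, and flags are copied from the trace; key1/ckey1 stand in for whichever key pair the iteration uses, and $secret/$ctrl_secret are placeholders for the formatted DHHC-1 strings):

    # Paths and NQNs as they appear in the trace; key names are placeholders.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

    # 1. Pin the host-side initiator (its RPC socket is /var/tmp/host.sock)
    #    to a single digest/dhgroup pair so the handshake cannot fall back.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # 2. Authorize the host on the target subsystem with the keys under test
    #    (the target listens on the default RPC socket, hence no -s here).
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # 3. Attach a controller over RDMA; this is where DH-HMAC-CHAP runs.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # 4. Read back the qpair's negotiated auth parameters from the target,
    #    then detach the controller.
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # 5. Repeat the handshake with the kernel initiator, passing the same
    #    keys in their DHHC-1 wire format, then clean up for the next round.
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
        --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
    nvme disconnect -n "$subnqn"
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The split between the two RPC sockets is what makes the test meaningful: the target process owns the subsystem and keys, while a separate host process (behind /var/tmp/host.sock) must prove possession of the same secrets to attach.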
00:13:52.310 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.310 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:52.310 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:52.310 10:42:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:52.570 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:13:52.570 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.570 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:52.570 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:52.570 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:52.570 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.570 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.570 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.570 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.570 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.570 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.570 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.570 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.136 00:13:53.136 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.136 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.136 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.136 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.136 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.136 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.136 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.136 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.136 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.136 { 00:13:53.136 "cntlid": 45, 00:13:53.136 "qid": 0, 00:13:53.136 "state": "enabled", 00:13:53.136 "thread": "nvmf_tgt_poll_group_000", 00:13:53.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:13:53.136 "listen_address": { 00:13:53.136 "trtype": "RDMA", 00:13:53.136 "adrfam": "IPv4", 00:13:53.136 "traddr": "192.168.100.8", 00:13:53.136 "trsvcid": "4420" 00:13:53.136 }, 00:13:53.136 "peer_address": { 00:13:53.136 "trtype": "RDMA", 00:13:53.136 "adrfam": "IPv4", 00:13:53.136 "traddr": "192.168.100.8", 00:13:53.136 "trsvcid": "54467" 00:13:53.136 }, 00:13:53.136 "auth": { 00:13:53.136 "state": "completed", 00:13:53.136 "digest": "sha256", 00:13:53.136 "dhgroup": "ffdhe8192" 00:13:53.136 } 00:13:53.136 } 00:13:53.136 ]' 00:13:53.136 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.136 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:53.136 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:53.396 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:53.396 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.396 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.396 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.396 10:42:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.655 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:13:53.655 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:13:54.223 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.223 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:54.223 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.223 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.223 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.224 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:54.224 10:42:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:54.483 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:13:54.483 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.483 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:54.483 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:54.483 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:54.483 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.483 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:13:54.483 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.483 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.483 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.483 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:54.483 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:54.483 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:55.051 00:13:55.051 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:55.051 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.051 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.051 
10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.051 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.051 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.051 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.051 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.051 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.051 { 00:13:55.051 "cntlid": 47, 00:13:55.051 "qid": 0, 00:13:55.051 "state": "enabled", 00:13:55.052 "thread": "nvmf_tgt_poll_group_000", 00:13:55.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:13:55.052 "listen_address": { 00:13:55.052 "trtype": "RDMA", 00:13:55.052 "adrfam": "IPv4", 00:13:55.052 "traddr": "192.168.100.8", 00:13:55.052 "trsvcid": "4420" 00:13:55.052 }, 00:13:55.052 "peer_address": { 00:13:55.052 "trtype": "RDMA", 00:13:55.052 "adrfam": "IPv4", 00:13:55.052 "traddr": "192.168.100.8", 00:13:55.052 "trsvcid": "59197" 00:13:55.052 }, 00:13:55.052 "auth": { 00:13:55.052 "state": "completed", 00:13:55.052 "digest": "sha256", 00:13:55.052 "dhgroup": "ffdhe8192" 00:13:55.052 } 00:13:55.052 } 00:13:55.052 ]' 00:13:55.052 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.311 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.311 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.311 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:55.311 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.311 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.311 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.311 10:42:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.570 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:13:55.570 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:13:56.137 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.137 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:56.137 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.137 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.137 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.137 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:56.137 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:56.138 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.138 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:56.138 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:56.396 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:56.396 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.396 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:56.396 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:56.396 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:56.396 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.396 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.396 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.396 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.396 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.396 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.396 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.396 10:42:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.656 00:13:56.656 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:13:56.656 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.656 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.915 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.915 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.915 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.915 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.915 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.915 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:56.915 { 00:13:56.915 "cntlid": 49, 00:13:56.915 "qid": 0, 00:13:56.915 "state": "enabled", 00:13:56.915 "thread": "nvmf_tgt_poll_group_000", 00:13:56.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:13:56.915 "listen_address": { 00:13:56.915 "trtype": "RDMA", 00:13:56.915 "adrfam": "IPv4", 00:13:56.915 "traddr": "192.168.100.8", 00:13:56.915 "trsvcid": "4420" 00:13:56.915 }, 00:13:56.915 "peer_address": { 00:13:56.915 "trtype": "RDMA", 00:13:56.915 "adrfam": "IPv4", 00:13:56.915 "traddr": "192.168.100.8", 00:13:56.915 "trsvcid": "56691" 00:13:56.915 }, 00:13:56.915 "auth": { 00:13:56.915 "state": "completed", 00:13:56.915 "digest": "sha384", 00:13:56.915 "dhgroup": "null" 00:13:56.915 } 00:13:56.915 } 00:13:56.915 ]' 00:13:56.915 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:56.915 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:56.915 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:56.915 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:56.915 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:56.915 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.915 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.915 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.174 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:13:57.174 10:42:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:13:57.742 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.001 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:58.001 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.001 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.001 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.001 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.001 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:58.001 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:58.261 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:58.261 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.261 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:58.261 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:58.261 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:58.261 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.261 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.261 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.261 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.261 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.261 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.261 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.261 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.520 00:13:58.520 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:58.520 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:58.520 10:42:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.520 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.520 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.520 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.520 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.779 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.779 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:58.779 { 00:13:58.779 "cntlid": 51, 00:13:58.779 "qid": 0, 00:13:58.779 "state": "enabled", 00:13:58.779 "thread": "nvmf_tgt_poll_group_000", 00:13:58.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:13:58.779 "listen_address": { 00:13:58.779 "trtype": "RDMA", 00:13:58.779 "adrfam": "IPv4", 00:13:58.779 "traddr": "192.168.100.8", 00:13:58.779 "trsvcid": "4420" 00:13:58.779 }, 00:13:58.779 "peer_address": { 00:13:58.779 "trtype": "RDMA", 00:13:58.779 "adrfam": "IPv4", 00:13:58.779 "traddr": "192.168.100.8", 00:13:58.779 "trsvcid": "39976" 00:13:58.779 }, 00:13:58.779 "auth": { 00:13:58.779 "state": "completed", 00:13:58.779 "digest": "sha384", 00:13:58.779 "dhgroup": "null" 00:13:58.779 } 00:13:58.779 } 00:13:58.779 ]' 00:13:58.780 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:58.780 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:58.780 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:58.780 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:58.780 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:58.780 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.780 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.780 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.039 10:42:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:13:59.039 10:42:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:13:59.607 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.607 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:59.607 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.607 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.607 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.607 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:59.607 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:59.607 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:59.867 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:59.867 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.867 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:59.867 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:59.867 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:59.867 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.867 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.867 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.867 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.867 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.867 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.867 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:13:59.867 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.126 00:14:00.126 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:00.126 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:00.126 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.385 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.385 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.385 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.385 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.385 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.385 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:00.385 { 00:14:00.385 "cntlid": 53, 00:14:00.385 "qid": 0, 00:14:00.385 "state": "enabled", 00:14:00.385 "thread": "nvmf_tgt_poll_group_000", 00:14:00.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:00.385 "listen_address": { 00:14:00.385 "trtype": "RDMA", 00:14:00.385 "adrfam": "IPv4", 00:14:00.385 "traddr": "192.168.100.8", 00:14:00.385 "trsvcid": "4420" 00:14:00.385 }, 00:14:00.385 "peer_address": { 00:14:00.385 "trtype": "RDMA", 00:14:00.385 "adrfam": "IPv4", 00:14:00.385 "traddr": "192.168.100.8", 00:14:00.385 "trsvcid": "35040" 00:14:00.385 }, 00:14:00.385 "auth": { 00:14:00.385 "state": "completed", 00:14:00.385 "digest": "sha384", 00:14:00.385 "dhgroup": "null" 00:14:00.385 } 00:14:00.385 } 00:14:00.385 ]' 00:14:00.386 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:00.386 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:00.386 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:00.386 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:00.386 10:42:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:00.386 10:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.386 10:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.386 10:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.648 10:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:14:00.648 10:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:14:01.216 10:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.475 10:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:01.475 10:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.475 10:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.475 10:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.475 10:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:01.475 10:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:01.475 10:42:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:01.475 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:14:01.475 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.475 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:01.475 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:01.475 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:01.475 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.475 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:01.475 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.475 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.735 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.735 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:01.735 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 
-a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.735 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.735 00:14:01.735 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.735 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.735 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.994 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.994 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.994 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.994 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.994 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.994 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.994 { 00:14:01.994 "cntlid": 55, 00:14:01.994 "qid": 0, 00:14:01.994 "state": "enabled", 00:14:01.994 "thread": "nvmf_tgt_poll_group_000", 00:14:01.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:01.994 "listen_address": { 00:14:01.994 "trtype": "RDMA", 00:14:01.994 "adrfam": "IPv4", 00:14:01.994 "traddr": "192.168.100.8", 00:14:01.994 "trsvcid": "4420" 00:14:01.994 }, 00:14:01.994 "peer_address": { 00:14:01.994 "trtype": "RDMA", 00:14:01.994 "adrfam": "IPv4", 00:14:01.994 "traddr": "192.168.100.8", 00:14:01.994 "trsvcid": "43655" 00:14:01.994 }, 00:14:01.994 "auth": { 00:14:01.994 "state": "completed", 00:14:01.994 "digest": "sha384", 00:14:01.994 "dhgroup": "null" 00:14:01.994 } 00:14:01.994 } 00:14:01.994 ]' 00:14:01.994 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.994 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:01.994 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:02.252 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:02.252 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:02.252 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.252 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.252 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:14:02.252 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:14:02.252 10:42:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:14:03.188 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.188 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:03.188 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.188 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.188 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.188 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:03.188 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:03.188 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:03.188 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:03.447 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:14:03.447 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:03.447 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:03.447 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:03.447 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:03.447 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.447 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.447 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.447 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.447 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.447 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.447 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.447 10:42:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.447 00:14:03.706 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.706 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.707 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.707 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.707 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.707 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.707 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.707 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.707 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.707 { 00:14:03.707 "cntlid": 57, 00:14:03.707 "qid": 0, 00:14:03.707 "state": "enabled", 00:14:03.707 "thread": "nvmf_tgt_poll_group_000", 00:14:03.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:03.707 "listen_address": { 00:14:03.707 "trtype": "RDMA", 00:14:03.707 "adrfam": "IPv4", 00:14:03.707 "traddr": "192.168.100.8", 00:14:03.707 "trsvcid": "4420" 00:14:03.707 }, 00:14:03.707 "peer_address": { 00:14:03.707 "trtype": "RDMA", 00:14:03.707 "adrfam": "IPv4", 00:14:03.707 "traddr": "192.168.100.8", 00:14:03.707 "trsvcid": "36378" 00:14:03.707 }, 00:14:03.707 "auth": { 00:14:03.707 "state": "completed", 00:14:03.707 "digest": "sha384", 00:14:03.707 "dhgroup": "ffdhe2048" 00:14:03.707 } 00:14:03.707 } 00:14:03.707 ]' 00:14:03.707 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.965 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:03.965 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.965 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:03.965 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.965 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.965 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:14:03.965 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.224 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:14:04.224 10:42:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:14:04.792 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.793 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:04.793 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.793 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.793 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.793 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.793 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:04.793 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:05.052 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:14:05.052 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.052 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:05.052 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:05.052 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:05.052 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.052 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.052 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.052 
10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.052 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.052 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.052 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.052 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.311 00:14:05.311 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.311 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.311 10:42:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.571 10:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.571 10:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.571 10:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.571 10:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.571 10:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.571 10:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.571 { 00:14:05.571 "cntlid": 59, 00:14:05.571 "qid": 0, 00:14:05.571 "state": "enabled", 00:14:05.571 "thread": "nvmf_tgt_poll_group_000", 00:14:05.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:05.571 "listen_address": { 00:14:05.571 "trtype": "RDMA", 00:14:05.571 "adrfam": "IPv4", 00:14:05.571 "traddr": "192.168.100.8", 00:14:05.571 "trsvcid": "4420" 00:14:05.571 }, 00:14:05.571 "peer_address": { 00:14:05.571 "trtype": "RDMA", 00:14:05.571 "adrfam": "IPv4", 00:14:05.571 "traddr": "192.168.100.8", 00:14:05.571 "trsvcid": "47995" 00:14:05.571 }, 00:14:05.571 "auth": { 00:14:05.571 "state": "completed", 00:14:05.571 "digest": "sha384", 00:14:05.571 "dhgroup": "ffdhe2048" 00:14:05.571 } 00:14:05.571 } 00:14:05.571 ]' 00:14:05.571 10:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.571 10:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:05.571 10:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.571 10:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 
00:14:05.571 10:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.571 10:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.571 10:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.571 10:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.830 10:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:14:05.830 10:42:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:14:06.399 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.658 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:06.658 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.658 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.658 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.658 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.658 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:06.658 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:06.658 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:14:06.658 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:06.658 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:06.918 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:06.918 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:06.918 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.918 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.918 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.918 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.918 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.918 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.918 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.918 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.918 00:14:07.177 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:07.177 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:07.177 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.177 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.177 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.177 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.177 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.177 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.178 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:07.178 { 00:14:07.178 "cntlid": 61, 00:14:07.178 "qid": 0, 00:14:07.178 "state": "enabled", 00:14:07.178 "thread": "nvmf_tgt_poll_group_000", 00:14:07.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:07.178 "listen_address": { 00:14:07.178 "trtype": "RDMA", 00:14:07.178 "adrfam": "IPv4", 00:14:07.178 "traddr": "192.168.100.8", 00:14:07.178 "trsvcid": "4420" 00:14:07.178 }, 00:14:07.178 "peer_address": { 00:14:07.178 "trtype": "RDMA", 00:14:07.178 "adrfam": "IPv4", 00:14:07.178 "traddr": "192.168.100.8", 00:14:07.178 "trsvcid": "33418" 00:14:07.178 }, 00:14:07.178 "auth": { 00:14:07.178 "state": "completed", 00:14:07.178 "digest": "sha384", 00:14:07.178 "dhgroup": "ffdhe2048" 00:14:07.178 } 00:14:07.178 } 00:14:07.178 ]' 00:14:07.178 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:07.178 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:14:07.437 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:07.437 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:07.437 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:07.437 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.437 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.437 10:42:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.696 10:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:14:07.697 10:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:14:08.265 10:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.265 10:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:08.265 10:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.265 10:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.265 10:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.265 10:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:08.265 10:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:08.265 10:42:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:08.523 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:14:08.523 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:08.523 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:08.523 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:08.523 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:08.523 10:42:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.523 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:08.523 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.523 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.523 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.523 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:08.523 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:08.523 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:08.782 00:14:08.782 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.782 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.782 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.041 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.041 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.041 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.041 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.041 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.041 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:09.041 { 00:14:09.041 "cntlid": 63, 00:14:09.041 "qid": 0, 00:14:09.041 "state": "enabled", 00:14:09.041 "thread": "nvmf_tgt_poll_group_000", 00:14:09.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:09.041 "listen_address": { 00:14:09.041 "trtype": "RDMA", 00:14:09.041 "adrfam": "IPv4", 00:14:09.041 "traddr": "192.168.100.8", 00:14:09.041 "trsvcid": "4420" 00:14:09.041 }, 00:14:09.041 "peer_address": { 00:14:09.041 "trtype": "RDMA", 00:14:09.041 "adrfam": "IPv4", 00:14:09.041 "traddr": "192.168.100.8", 00:14:09.041 "trsvcid": "48594" 00:14:09.041 }, 00:14:09.041 "auth": { 00:14:09.041 "state": "completed", 00:14:09.041 "digest": "sha384", 00:14:09.041 "dhgroup": "ffdhe2048" 00:14:09.041 } 00:14:09.041 } 00:14:09.041 ]' 00:14:09.041 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:09.041 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:09.041 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:09.041 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:09.041 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:09.041 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.041 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.041 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.300 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:14:09.300 10:42:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:14:09.869 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.133 10:42:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.443 00:14:10.443 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.443 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.443 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.743 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.743 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.743 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.743 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.743 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.743 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.743 { 00:14:10.743 "cntlid": 65, 00:14:10.743 "qid": 0, 00:14:10.743 "state": "enabled", 00:14:10.743 "thread": "nvmf_tgt_poll_group_000", 00:14:10.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:10.743 "listen_address": { 00:14:10.744 "trtype": "RDMA", 00:14:10.744 "adrfam": "IPv4", 00:14:10.744 "traddr": "192.168.100.8", 00:14:10.744 "trsvcid": "4420" 00:14:10.744 }, 00:14:10.744 "peer_address": { 00:14:10.744 "trtype": "RDMA", 00:14:10.744 "adrfam": "IPv4", 00:14:10.744 "traddr": "192.168.100.8", 00:14:10.744 "trsvcid": "42514" 
00:14:10.744 }, 00:14:10.744 "auth": { 00:14:10.744 "state": "completed", 00:14:10.744 "digest": "sha384", 00:14:10.744 "dhgroup": "ffdhe3072" 00:14:10.744 } 00:14:10.744 } 00:14:10.744 ]' 00:14:10.744 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:10.744 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:10.744 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.744 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:10.744 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.012 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.012 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.012 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.012 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:14:11.012 10:42:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:14:11.580 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.839 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:11.839 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.839 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.839 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.839 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:11.839 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:11.839 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:12.101 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha384 ffdhe3072 1 00:14:12.102 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.102 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:12.102 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:12.102 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:12.102 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.102 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.102 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.102 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.102 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.102 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.102 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.103 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.367 00:14:12.367 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.367 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.367 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.367 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.367 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.367 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.367 10:42:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.367 10:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.367 10:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:12.367 { 00:14:12.367 "cntlid": 67, 00:14:12.367 "qid": 0, 00:14:12.367 "state": "enabled", 00:14:12.367 "thread": "nvmf_tgt_poll_group_000", 00:14:12.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 
00:14:12.367 "listen_address": { 00:14:12.367 "trtype": "RDMA", 00:14:12.367 "adrfam": "IPv4", 00:14:12.367 "traddr": "192.168.100.8", 00:14:12.367 "trsvcid": "4420" 00:14:12.367 }, 00:14:12.367 "peer_address": { 00:14:12.367 "trtype": "RDMA", 00:14:12.367 "adrfam": "IPv4", 00:14:12.367 "traddr": "192.168.100.8", 00:14:12.367 "trsvcid": "54840" 00:14:12.367 }, 00:14:12.367 "auth": { 00:14:12.367 "state": "completed", 00:14:12.367 "digest": "sha384", 00:14:12.367 "dhgroup": "ffdhe3072" 00:14:12.367 } 00:14:12.367 } 00:14:12.367 ]' 00:14:12.367 10:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:12.625 10:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:12.625 10:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:12.625 10:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:12.625 10:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:12.625 10:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.625 10:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.625 10:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.883 10:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:14:12.883 10:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:14:13.448 10:42:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.448 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:13.448 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.448 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.448 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.448 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:13.448 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:13.448 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:13.707 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:14:13.707 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:13.707 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:13.707 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:13.707 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:13.707 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.707 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.707 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.707 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.707 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.707 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.707 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.707 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.966 00:14:13.966 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.966 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.966 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.225 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.225 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.225 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.225 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.225 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.225 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:14:14.225 { 00:14:14.225 "cntlid": 69, 00:14:14.225 "qid": 0, 00:14:14.225 "state": "enabled", 00:14:14.225 "thread": "nvmf_tgt_poll_group_000", 00:14:14.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:14.225 "listen_address": { 00:14:14.225 "trtype": "RDMA", 00:14:14.225 "adrfam": "IPv4", 00:14:14.225 "traddr": "192.168.100.8", 00:14:14.225 "trsvcid": "4420" 00:14:14.225 }, 00:14:14.225 "peer_address": { 00:14:14.225 "trtype": "RDMA", 00:14:14.225 "adrfam": "IPv4", 00:14:14.225 "traddr": "192.168.100.8", 00:14:14.225 "trsvcid": "50238" 00:14:14.225 }, 00:14:14.225 "auth": { 00:14:14.225 "state": "completed", 00:14:14.225 "digest": "sha384", 00:14:14.225 "dhgroup": "ffdhe3072" 00:14:14.225 } 00:14:14.225 } 00:14:14.225 ]' 00:14:14.225 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:14.225 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:14.225 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:14.225 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:14.225 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.225 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.225 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.225 10:42:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.484 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:14:14.484 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:14:15.048 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:15.307 10:42:42 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:15.307 10:42:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:15.565 00:14:15.565 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:15.565 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.565 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.823 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.823 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.823 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.823 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.823 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.823 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.823 { 00:14:15.823 "cntlid": 71, 00:14:15.823 "qid": 0, 00:14:15.823 "state": "enabled", 00:14:15.823 "thread": "nvmf_tgt_poll_group_000", 00:14:15.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:15.823 "listen_address": { 00:14:15.823 "trtype": "RDMA", 00:14:15.823 "adrfam": "IPv4", 00:14:15.823 "traddr": "192.168.100.8", 00:14:15.823 "trsvcid": "4420" 00:14:15.823 }, 00:14:15.823 "peer_address": { 00:14:15.823 "trtype": "RDMA", 00:14:15.823 "adrfam": "IPv4", 00:14:15.823 "traddr": "192.168.100.8", 00:14:15.823 "trsvcid": "58911" 00:14:15.823 }, 00:14:15.823 "auth": { 00:14:15.823 "state": "completed", 00:14:15.823 "digest": "sha384", 00:14:15.823 "dhgroup": "ffdhe3072" 00:14:15.823 } 00:14:15.823 } 00:14:15.823 ]' 00:14:15.823 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.823 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:15.823 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.823 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:15.823 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.081 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.081 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.081 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.081 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:14:16.081 10:42:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.016 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.294 00:14:17.294 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:17.294 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:17.294 10:42:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.552 10:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.552 10:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.552 10:42:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.552 10:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.552 10:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.552 10:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.552 { 00:14:17.552 "cntlid": 73, 00:14:17.552 "qid": 0, 00:14:17.552 "state": "enabled", 00:14:17.552 "thread": "nvmf_tgt_poll_group_000", 00:14:17.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:17.552 "listen_address": { 00:14:17.552 "trtype": "RDMA", 00:14:17.552 "adrfam": "IPv4", 00:14:17.552 "traddr": "192.168.100.8", 00:14:17.552 "trsvcid": "4420" 00:14:17.552 }, 00:14:17.552 "peer_address": { 00:14:17.552 "trtype": "RDMA", 00:14:17.552 "adrfam": "IPv4", 00:14:17.552 "traddr": "192.168.100.8", 00:14:17.552 "trsvcid": "47779" 00:14:17.552 }, 00:14:17.552 "auth": { 00:14:17.552 "state": "completed", 00:14:17.552 "digest": "sha384", 00:14:17.552 "dhgroup": "ffdhe4096" 00:14:17.552 } 00:14:17.552 } 00:14:17.552 ]' 00:14:17.552 10:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.552 10:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:17.552 10:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.552 10:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:17.552 10:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.810 10:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.810 10:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.810 10:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.810 10:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:14:17.810 10:42:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.750 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.015 00:14:19.015 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.015 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.015 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.274 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.274 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.274 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.274 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.274 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.274 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.274 { 00:14:19.274 "cntlid": 75, 00:14:19.274 "qid": 0, 00:14:19.274 "state": "enabled", 00:14:19.274 "thread": "nvmf_tgt_poll_group_000", 00:14:19.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:19.274 "listen_address": { 00:14:19.274 "trtype": "RDMA", 00:14:19.274 "adrfam": "IPv4", 00:14:19.274 "traddr": "192.168.100.8", 00:14:19.274 "trsvcid": "4420" 00:14:19.274 }, 00:14:19.274 "peer_address": { 00:14:19.274 "trtype": "RDMA", 00:14:19.274 "adrfam": "IPv4", 00:14:19.274 "traddr": "192.168.100.8", 00:14:19.274 "trsvcid": "36574" 00:14:19.274 }, 00:14:19.274 "auth": { 00:14:19.274 "state": "completed", 00:14:19.274 "digest": "sha384", 00:14:19.274 "dhgroup": "ffdhe4096" 00:14:19.274 } 00:14:19.274 } 00:14:19.274 ]' 00:14:19.274 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.274 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:19.274 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.533 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:19.533 10:42:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.533 10:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.533 10:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.533 10:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.792 10:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:14:19.792 10:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:14:20.362 10:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.362 10:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:20.362 10:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.362 10:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.362 10:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.362 10:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.362 10:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:20.362 10:42:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:20.621 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:14:20.621 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.621 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:20.621 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:20.621 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:20.621 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.621 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.621 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.621 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.621 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.621 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.621 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.621 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.880 00:14:20.880 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 
-- # hostrpc bdev_nvme_get_controllers 00:14:20.880 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:20.880 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.139 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.139 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.139 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.139 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.139 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.139 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.139 { 00:14:21.139 "cntlid": 77, 00:14:21.139 "qid": 0, 00:14:21.139 "state": "enabled", 00:14:21.139 "thread": "nvmf_tgt_poll_group_000", 00:14:21.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:21.139 "listen_address": { 00:14:21.139 "trtype": "RDMA", 00:14:21.139 "adrfam": "IPv4", 00:14:21.139 "traddr": "192.168.100.8", 00:14:21.139 "trsvcid": "4420" 00:14:21.139 }, 00:14:21.139 "peer_address": { 00:14:21.139 "trtype": "RDMA", 00:14:21.139 "adrfam": "IPv4", 00:14:21.139 "traddr": "192.168.100.8", 00:14:21.139 "trsvcid": "47596" 00:14:21.139 }, 00:14:21.139 "auth": { 00:14:21.139 "state": "completed", 00:14:21.139 "digest": "sha384", 00:14:21.139 "dhgroup": "ffdhe4096" 00:14:21.139 } 00:14:21.139 } 00:14:21.139 ]' 00:14:21.139 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.139 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:21.139 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.139 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:21.139 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.140 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.140 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.140 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.399 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:14:21.399 10:42:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:14:21.967 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.225 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:22.225 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.225 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.225 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.225 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.225 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:22.225 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:22.484 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:14:22.484 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.484 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:22.484 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:22.484 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:22.484 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.484 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:22.484 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.484 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.484 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.484 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:22.484 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:22.484 10:42:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:22.744 00:14:22.744 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:22.744 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:22.744 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.744 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.744 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.744 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.744 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.003 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.003 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.003 { 00:14:23.003 "cntlid": 79, 00:14:23.003 "qid": 0, 00:14:23.003 "state": "enabled", 00:14:23.003 "thread": "nvmf_tgt_poll_group_000", 00:14:23.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:23.003 "listen_address": { 00:14:23.003 "trtype": "RDMA", 00:14:23.003 "adrfam": "IPv4", 00:14:23.003 "traddr": "192.168.100.8", 00:14:23.003 "trsvcid": "4420" 00:14:23.003 }, 00:14:23.003 "peer_address": { 00:14:23.003 "trtype": "RDMA", 00:14:23.003 "adrfam": "IPv4", 00:14:23.003 "traddr": "192.168.100.8", 00:14:23.003 "trsvcid": "32967" 00:14:23.003 }, 00:14:23.003 "auth": { 00:14:23.003 "state": "completed", 00:14:23.003 "digest": "sha384", 00:14:23.003 "dhgroup": "ffdhe4096" 00:14:23.003 } 00:14:23.003 } 00:14:23.003 ]' 00:14:23.003 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.003 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:23.003 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.003 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:23.003 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.003 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.003 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.003 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.262 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:14:23.262 10:42:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:14:23.830 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.830 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:23.830 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.830 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.830 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.830 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:23.830 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.830 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:23.830 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:24.090 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:14:24.090 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:24.090 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:24.090 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:24.090 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:24.090 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.090 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.090 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.090 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.090 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.090 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.090 10:42:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.090 10:42:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.658 00:14:24.658 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.658 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.658 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.658 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.658 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.658 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.658 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.658 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.658 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.658 { 00:14:24.658 "cntlid": 81, 00:14:24.658 "qid": 0, 00:14:24.658 "state": "enabled", 00:14:24.658 "thread": "nvmf_tgt_poll_group_000", 00:14:24.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:24.658 "listen_address": { 00:14:24.658 "trtype": "RDMA", 00:14:24.658 "adrfam": "IPv4", 00:14:24.658 "traddr": "192.168.100.8", 00:14:24.658 "trsvcid": "4420" 00:14:24.658 }, 00:14:24.658 "peer_address": { 00:14:24.658 "trtype": "RDMA", 00:14:24.658 "adrfam": "IPv4", 00:14:24.658 "traddr": "192.168.100.8", 00:14:24.658 "trsvcid": "33621" 00:14:24.658 }, 00:14:24.658 "auth": { 00:14:24.658 "state": "completed", 00:14:24.658 "digest": "sha384", 00:14:24.658 "dhgroup": "ffdhe6144" 00:14:24.658 } 00:14:24.658 } 00:14:24.658 ]' 00:14:24.658 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.659 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:24.659 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.918 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:24.918 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.918 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.918 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.918 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.918 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:14:24.918 10:42:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
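The iterations above all follow one loop body per (digest, dhgroup, key) combination: bdev_nvme_set_options pins the host to a single digest/dhgroup pair, nvmf_subsystem_add_host tells the target which DH-HMAC-CHAP key (and, for bidirectional auth, controller key) to expect, bdev_nvme_attach_controller performs the authenticated connect, and the negotiated parameters are read back out of nvmf_subsystem_get_qpairs. A minimal standalone sketch of that loop follows, using the same sockets, NQNs, and key names as the trace; the RPC/HOSTRPC/SUBNQN/HOSTNQN variables are our shorthand, not part of the original script, and key1/ckey1 are assumed to already be registered in the keyrings, as done earlier in the run.

  set -e
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # target-side RPC (default socket)
  HOSTRPC="$RPC -s /var/tmp/host.sock"                               # host-side RPC
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

  # Pin the host to one digest/dhgroup pair, e.g. sha384 + ffdhe6144.
  $HOSTRPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Register the host on the target with the key pair under test.
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Authenticated attach over RDMA; DH-HMAC-CHAP runs during this call.
  $HOSTRPC bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # The qpair's auth block records what was actually negotiated.
  qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  $HOSTRPC bdev_nvme_detach_controller nvme0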
00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.855 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.433 00:14:26.433 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.433 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.433 10:42:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.433 10:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.433 10:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.433 10:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.433 10:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.433 10:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.433 10:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.433 { 00:14:26.433 "cntlid": 83, 00:14:26.433 "qid": 0, 00:14:26.433 "state": "enabled", 00:14:26.433 "thread": "nvmf_tgt_poll_group_000", 00:14:26.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:26.433 "listen_address": { 00:14:26.433 "trtype": "RDMA", 00:14:26.433 "adrfam": "IPv4", 00:14:26.433 "traddr": "192.168.100.8", 00:14:26.433 "trsvcid": "4420" 00:14:26.433 }, 00:14:26.433 "peer_address": { 00:14:26.433 "trtype": "RDMA", 00:14:26.433 "adrfam": "IPv4", 00:14:26.433 "traddr": "192.168.100.8", 00:14:26.433 "trsvcid": "43592" 00:14:26.433 }, 00:14:26.433 "auth": { 00:14:26.433 "state": "completed", 00:14:26.433 "digest": "sha384", 00:14:26.433 "dhgroup": "ffdhe6144" 00:14:26.433 } 00:14:26.433 } 00:14:26.433 ]' 00:14:26.433 10:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.691 10:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:26.691 10:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.691 10:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:26.691 10:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.691 10:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.691 10:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
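After the SPDK-host check, each iteration re-verifies the same key material with the kernel initiator: nvme connect is handed literal DHHC-1 secrets instead of keyring names, and success shows up as the subsequent disconnect reporting one controller. A condensed sketch reusing the variables from the previous sketch; the secret values here are placeholders, the real ones being the DHHC-1:xx:...: strings visible in the trace.

  SECRET='DHHC-1:01:<base64-key-material>:'        # placeholder host secret
  CTRL_SECRET='DHHC-1:02:<base64-key-material>:'   # placeholder controller secret

  # Kernel-initiator leg: bidirectional DH-HMAC-CHAP during connect.
  nvme connect -t rdma -a 192.168.100.8 -n "$SUBNQN" -i 1 \
      -q "$HOSTNQN" --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
      --dhchap-secret "$SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"

  # "disconnected 1 controller(s)" implies the authenticated connect succeeded.
  nvme disconnect -n "$SUBNQN"

  # Remove the host entry so the next (digest, dhgroup, key) combination starts clean.
  $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"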
00:14:26.691 10:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.950 10:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:14:26.950 10:42:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:14:27.518 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.518 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:27.518 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.518 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.518 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.518 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.518 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:27.519 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:27.777 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:14:27.778 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.778 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:27.778 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:27.778 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:27.778 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.778 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.778 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.778 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.778 10:42:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.778 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.778 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.778 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.036 00:14:28.296 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.296 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.296 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.296 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.296 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.296 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.296 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.296 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.296 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.296 { 00:14:28.296 "cntlid": 85, 00:14:28.296 "qid": 0, 00:14:28.296 "state": "enabled", 00:14:28.296 "thread": "nvmf_tgt_poll_group_000", 00:14:28.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:28.296 "listen_address": { 00:14:28.296 "trtype": "RDMA", 00:14:28.296 "adrfam": "IPv4", 00:14:28.296 "traddr": "192.168.100.8", 00:14:28.296 "trsvcid": "4420" 00:14:28.296 }, 00:14:28.296 "peer_address": { 00:14:28.296 "trtype": "RDMA", 00:14:28.296 "adrfam": "IPv4", 00:14:28.296 "traddr": "192.168.100.8", 00:14:28.296 "trsvcid": "55767" 00:14:28.296 }, 00:14:28.296 "auth": { 00:14:28.296 "state": "completed", 00:14:28.296 "digest": "sha384", 00:14:28.296 "dhgroup": "ffdhe6144" 00:14:28.296 } 00:14:28.296 } 00:14:28.296 ]' 00:14:28.296 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.296 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:28.296 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.555 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:28.555 10:42:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.555 
10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.555 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.555 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.814 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:14:28.814 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:14:29.381 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.381 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:29.381 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.381 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.381 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.381 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.381 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:29.381 10:42:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:29.641 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:14:29.641 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.641 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:29.641 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:29.641 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:29.641 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.641 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:29.641 10:42:57 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.641 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.641 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.641 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:29.641 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:29.641 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:29.900 00:14:29.900 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.900 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.900 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.159 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.159 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.159 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.159 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.159 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.159 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.159 { 00:14:30.159 "cntlid": 87, 00:14:30.159 "qid": 0, 00:14:30.159 "state": "enabled", 00:14:30.159 "thread": "nvmf_tgt_poll_group_000", 00:14:30.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:30.159 "listen_address": { 00:14:30.159 "trtype": "RDMA", 00:14:30.159 "adrfam": "IPv4", 00:14:30.159 "traddr": "192.168.100.8", 00:14:30.159 "trsvcid": "4420" 00:14:30.159 }, 00:14:30.159 "peer_address": { 00:14:30.159 "trtype": "RDMA", 00:14:30.159 "adrfam": "IPv4", 00:14:30.159 "traddr": "192.168.100.8", 00:14:30.159 "trsvcid": "60322" 00:14:30.159 }, 00:14:30.159 "auth": { 00:14:30.159 "state": "completed", 00:14:30.159 "digest": "sha384", 00:14:30.159 "dhgroup": "ffdhe6144" 00:14:30.159 } 00:14:30.159 } 00:14:30.159 ]' 00:14:30.159 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.159 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:30.159 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.159 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:14:30.159 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.418 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.418 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.418 10:42:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.418 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:14:30.418 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:14:30.986 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.245 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:31.245 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.245 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.245 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.245 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:31.245 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.245 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:31.245 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:31.504 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:14:31.504 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.504 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:31.504 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:31.504 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:31.504 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.504 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.504 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.504 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.504 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.504 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.504 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.504 10:42:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.072 00:14:32.072 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.072 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.072 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.072 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.072 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.072 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.072 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.072 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.072 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.072 { 00:14:32.072 "cntlid": 89, 00:14:32.072 "qid": 0, 00:14:32.072 "state": "enabled", 00:14:32.072 "thread": "nvmf_tgt_poll_group_000", 00:14:32.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:32.072 "listen_address": { 00:14:32.072 "trtype": "RDMA", 00:14:32.072 "adrfam": "IPv4", 00:14:32.072 "traddr": "192.168.100.8", 00:14:32.072 "trsvcid": "4420" 00:14:32.072 }, 00:14:32.072 "peer_address": { 00:14:32.072 "trtype": "RDMA", 00:14:32.072 "adrfam": "IPv4", 00:14:32.072 "traddr": "192.168.100.8", 00:14:32.072 "trsvcid": "47555" 00:14:32.072 }, 00:14:32.072 "auth": { 00:14:32.072 "state": "completed", 00:14:32.072 "digest": "sha384", 00:14:32.072 "dhgroup": "ffdhe8192" 00:14:32.072 } 00:14:32.072 } 00:14:32.072 ]' 00:14:32.072 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.072 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:32.072 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.331 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:32.331 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.331 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.331 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.331 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.331 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:14:32.331 10:42:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:14:33.267 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.267 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:33.267 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.267 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.267 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.267 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.267 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:33.267 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:33.526 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:14:33.526 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.526 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:33.526 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 
00:14:33.526 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:33.526 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.526 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.526 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.526 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.526 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.526 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.526 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.526 10:43:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.785 00:14:33.785 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:33.785 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.785 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.044 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.044 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.044 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.044 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.044 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.044 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.044 { 00:14:34.044 "cntlid": 91, 00:14:34.044 "qid": 0, 00:14:34.044 "state": "enabled", 00:14:34.044 "thread": "nvmf_tgt_poll_group_000", 00:14:34.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:34.044 "listen_address": { 00:14:34.044 "trtype": "RDMA", 00:14:34.044 "adrfam": "IPv4", 00:14:34.044 "traddr": "192.168.100.8", 00:14:34.044 "trsvcid": "4420" 00:14:34.044 }, 00:14:34.044 "peer_address": { 00:14:34.044 "trtype": "RDMA", 00:14:34.044 "adrfam": "IPv4", 00:14:34.044 "traddr": "192.168.100.8", 00:14:34.044 "trsvcid": "47985" 00:14:34.044 }, 00:14:34.044 "auth": { 
00:14:34.044 "state": "completed", 00:14:34.044 "digest": "sha384", 00:14:34.044 "dhgroup": "ffdhe8192" 00:14:34.044 } 00:14:34.044 } 00:14:34.044 ]' 00:14:34.044 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.044 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:34.044 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.044 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:34.044 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.304 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.304 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.304 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.304 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:14:34.304 10:43:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:14:35.241 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.241 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:35.241 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.241 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.241 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.241 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.241 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:35.241 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:35.241 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:14:35.241 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:14:35.241 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:35.241 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:35.241 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:35.242 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.242 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.242 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.242 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.242 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.242 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.242 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.242 10:43:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.809 00:14:35.809 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.809 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.809 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.068 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.068 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.068 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.068 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.068 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.068 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.068 { 00:14:36.068 "cntlid": 93, 00:14:36.068 "qid": 0, 00:14:36.068 "state": "enabled", 00:14:36.068 "thread": "nvmf_tgt_poll_group_000", 00:14:36.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:36.068 "listen_address": { 00:14:36.068 "trtype": "RDMA", 00:14:36.068 "adrfam": "IPv4", 00:14:36.068 "traddr": "192.168.100.8", 
00:14:36.068 "trsvcid": "4420" 00:14:36.068 }, 00:14:36.068 "peer_address": { 00:14:36.068 "trtype": "RDMA", 00:14:36.068 "adrfam": "IPv4", 00:14:36.068 "traddr": "192.168.100.8", 00:14:36.068 "trsvcid": "59055" 00:14:36.068 }, 00:14:36.068 "auth": { 00:14:36.068 "state": "completed", 00:14:36.068 "digest": "sha384", 00:14:36.068 "dhgroup": "ffdhe8192" 00:14:36.068 } 00:14:36.068 } 00:14:36.068 ]' 00:14:36.068 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.068 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:36.068 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.068 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:36.068 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.068 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.068 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.068 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.327 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:14:36.327 10:43:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:14:36.893 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:37.153 10:43:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:37.721 00:14:37.721 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.721 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.721 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.980 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.980 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.980 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.980 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.980 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.980 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.980 { 00:14:37.980 "cntlid": 95, 00:14:37.980 "qid": 0, 00:14:37.980 "state": "enabled", 00:14:37.980 "thread": "nvmf_tgt_poll_group_000", 00:14:37.980 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:37.980 "listen_address": { 00:14:37.980 "trtype": "RDMA", 00:14:37.980 "adrfam": "IPv4", 00:14:37.980 "traddr": "192.168.100.8", 00:14:37.980 "trsvcid": "4420" 00:14:37.980 }, 00:14:37.980 "peer_address": { 00:14:37.980 "trtype": "RDMA", 00:14:37.980 "adrfam": "IPv4", 00:14:37.980 "traddr": "192.168.100.8", 00:14:37.980 "trsvcid": "48991" 00:14:37.980 }, 00:14:37.980 "auth": { 00:14:37.980 "state": "completed", 00:14:37.980 "digest": "sha384", 00:14:37.980 "dhgroup": "ffdhe8192" 00:14:37.980 } 00:14:37.980 } 00:14:37.980 ]' 00:14:37.980 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.980 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:37.980 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.980 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:37.980 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.980 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.980 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.980 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.239 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:14:38.239 10:43:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:14:38.807 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.065 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:39.065 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.065 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.066 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.404 00:14:39.404 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.404 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.404 10:43:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.688 10:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.688 10:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.688 10:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.688 10:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.688 10:43:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.688 10:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.688 { 00:14:39.688 "cntlid": 97, 00:14:39.688 "qid": 0, 00:14:39.688 "state": "enabled", 00:14:39.688 "thread": "nvmf_tgt_poll_group_000", 00:14:39.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:39.688 "listen_address": { 00:14:39.688 "trtype": "RDMA", 00:14:39.688 "adrfam": "IPv4", 00:14:39.688 "traddr": "192.168.100.8", 00:14:39.688 "trsvcid": "4420" 00:14:39.688 }, 00:14:39.688 "peer_address": { 00:14:39.688 "trtype": "RDMA", 00:14:39.688 "adrfam": "IPv4", 00:14:39.688 "traddr": "192.168.100.8", 00:14:39.688 "trsvcid": "56433" 00:14:39.688 }, 00:14:39.688 "auth": { 00:14:39.688 "state": "completed", 00:14:39.688 "digest": "sha512", 00:14:39.688 "dhgroup": "null" 00:14:39.688 } 00:14:39.688 } 00:14:39.688 ]' 00:14:39.688 10:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.688 10:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:39.688 10:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.688 10:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:39.688 10:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.688 10:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.688 10:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.688 10:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.947 10:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:14:39.947 10:43:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:14:40.523 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.784 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:40.784 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.784 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:40.784 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.784 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.784 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:40.784 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:41.043 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:41.043 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.043 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:41.043 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:41.043 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:41.043 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.043 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.043 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.043 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.043 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.043 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.043 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.043 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.043 00:14:41.303 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.303 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.303 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.303 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.303 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.303 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.303 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.303 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.303 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.303 { 00:14:41.303 "cntlid": 99, 00:14:41.303 "qid": 0, 00:14:41.303 "state": "enabled", 00:14:41.303 "thread": "nvmf_tgt_poll_group_000", 00:14:41.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:41.303 "listen_address": { 00:14:41.303 "trtype": "RDMA", 00:14:41.303 "adrfam": "IPv4", 00:14:41.303 "traddr": "192.168.100.8", 00:14:41.303 "trsvcid": "4420" 00:14:41.303 }, 00:14:41.303 "peer_address": { 00:14:41.303 "trtype": "RDMA", 00:14:41.303 "adrfam": "IPv4", 00:14:41.303 "traddr": "192.168.100.8", 00:14:41.303 "trsvcid": "49686" 00:14:41.303 }, 00:14:41.303 "auth": { 00:14:41.303 "state": "completed", 00:14:41.303 "digest": "sha512", 00:14:41.303 "dhgroup": "null" 00:14:41.303 } 00:14:41.303 } 00:14:41.303 ]' 00:14:41.303 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.563 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:41.563 10:43:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.563 10:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:41.563 10:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.563 10:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.563 10:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.563 10:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.822 10:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:14:41.822 10:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:14:42.390 10:43:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.390 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:42.390 
10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.390 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.390 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.390 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.390 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:42.390 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:42.649 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:42.649 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.649 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:42.649 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:42.649 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:42.649 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.649 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.649 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.649 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.649 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.649 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.649 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.649 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:42.909 00:14:42.909 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.909 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.909 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.168 
10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.168 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.168 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.168 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.168 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.168 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.168 { 00:14:43.168 "cntlid": 101, 00:14:43.168 "qid": 0, 00:14:43.168 "state": "enabled", 00:14:43.168 "thread": "nvmf_tgt_poll_group_000", 00:14:43.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:43.168 "listen_address": { 00:14:43.168 "trtype": "RDMA", 00:14:43.168 "adrfam": "IPv4", 00:14:43.168 "traddr": "192.168.100.8", 00:14:43.168 "trsvcid": "4420" 00:14:43.168 }, 00:14:43.168 "peer_address": { 00:14:43.168 "trtype": "RDMA", 00:14:43.168 "adrfam": "IPv4", 00:14:43.168 "traddr": "192.168.100.8", 00:14:43.168 "trsvcid": "58027" 00:14:43.168 }, 00:14:43.168 "auth": { 00:14:43.168 "state": "completed", 00:14:43.168 "digest": "sha512", 00:14:43.168 "dhgroup": "null" 00:14:43.168 } 00:14:43.168 } 00:14:43.168 ]' 00:14:43.168 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.168 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:43.168 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.168 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:43.168 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:43.168 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.168 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.168 10:43:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.427 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:14:43.427 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:14:43.995 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.254 10:43:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:44.254 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.254 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.254 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.254 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:44.254 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:44.254 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:44.254 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:44.254 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:44.254 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:44.254 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:44.254 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:44.254 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.254 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:44.254 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.254 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.254 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.255 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:44.255 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:44.255 10:43:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:44.514 00:14:44.514 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.514 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.514 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.773 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.773 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.773 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.773 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.774 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.774 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.774 { 00:14:44.774 "cntlid": 103, 00:14:44.774 "qid": 0, 00:14:44.774 "state": "enabled", 00:14:44.774 "thread": "nvmf_tgt_poll_group_000", 00:14:44.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:44.774 "listen_address": { 00:14:44.774 "trtype": "RDMA", 00:14:44.774 "adrfam": "IPv4", 00:14:44.774 "traddr": "192.168.100.8", 00:14:44.774 "trsvcid": "4420" 00:14:44.774 }, 00:14:44.774 "peer_address": { 00:14:44.774 "trtype": "RDMA", 00:14:44.774 "adrfam": "IPv4", 00:14:44.774 "traddr": "192.168.100.8", 00:14:44.774 "trsvcid": "60056" 00:14:44.774 }, 00:14:44.774 "auth": { 00:14:44.774 "state": "completed", 00:14:44.774 "digest": "sha512", 00:14:44.774 "dhgroup": "null" 00:14:44.774 } 00:14:44.774 } 00:14:44.774 ]' 00:14:44.774 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.774 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.774 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.033 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:45.033 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:45.033 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.033 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.033 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.033 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:14:45.033 10:43:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.970 10:43:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.970 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.230 00:14:46.230 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:14:46.230 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.230 10:43:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.489 10:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.489 10:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.489 10:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.489 10:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.489 10:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.489 10:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.489 { 00:14:46.489 "cntlid": 105, 00:14:46.489 "qid": 0, 00:14:46.489 "state": "enabled", 00:14:46.489 "thread": "nvmf_tgt_poll_group_000", 00:14:46.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:46.489 "listen_address": { 00:14:46.489 "trtype": "RDMA", 00:14:46.489 "adrfam": "IPv4", 00:14:46.489 "traddr": "192.168.100.8", 00:14:46.489 "trsvcid": "4420" 00:14:46.489 }, 00:14:46.489 "peer_address": { 00:14:46.489 "trtype": "RDMA", 00:14:46.489 "adrfam": "IPv4", 00:14:46.489 "traddr": "192.168.100.8", 00:14:46.489 "trsvcid": "34906" 00:14:46.489 }, 00:14:46.489 "auth": { 00:14:46.489 "state": "completed", 00:14:46.489 "digest": "sha512", 00:14:46.489 "dhgroup": "ffdhe2048" 00:14:46.489 } 00:14:46.489 } 00:14:46.489 ]' 00:14:46.489 10:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.489 10:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:46.489 10:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.489 10:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:46.489 10:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.748 10:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.748 10:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.748 10:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.748 10:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:14:46.748 10:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 
--dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:14:47.316 10:43:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.575 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:47.575 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.575 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.575 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.575 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.575 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:47.575 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:47.834 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:47.835 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.835 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:47.835 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:47.835 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:47.835 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.835 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.835 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.835 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.835 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.835 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.835 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.835 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.094 00:14:48.094 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.094 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.094 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.094 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.094 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.094 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.094 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.094 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.094 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.094 { 00:14:48.094 "cntlid": 107, 00:14:48.094 "qid": 0, 00:14:48.094 "state": "enabled", 00:14:48.094 "thread": "nvmf_tgt_poll_group_000", 00:14:48.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:48.094 "listen_address": { 00:14:48.094 "trtype": "RDMA", 00:14:48.094 "adrfam": "IPv4", 00:14:48.094 "traddr": "192.168.100.8", 00:14:48.094 "trsvcid": "4420" 00:14:48.094 }, 00:14:48.094 "peer_address": { 00:14:48.094 "trtype": "RDMA", 00:14:48.094 "adrfam": "IPv4", 00:14:48.094 "traddr": "192.168.100.8", 00:14:48.094 "trsvcid": "60345" 00:14:48.094 }, 00:14:48.094 "auth": { 00:14:48.094 "state": "completed", 00:14:48.094 "digest": "sha512", 00:14:48.094 "dhgroup": "ffdhe2048" 00:14:48.094 } 00:14:48.094 } 00:14:48.094 ]' 00:14:48.094 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.353 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:48.353 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.353 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:48.353 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.353 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.353 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.353 10:43:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.612 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 
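The kernel-initiator leg traced above reduces each round to an nvme-cli connect/disconnect pair carrying the round's DH-HMAC-CHAP secrets. A minimal sketch, with the secrets elided as placeholders (the literal DHHC-1 values appear in the trace itself):

# Sketch of the nvme-cli leg; <host secret>/<controller secret> are placeholders.
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q "$hostnqn" --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
  --dhchap-secret 'DHHC-1:01:<host secret>' \
  --dhchap-ctrl-secret 'DHHC-1:02:<controller secret>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0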
00:14:48.612 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:14:49.181 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.181 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:49.181 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.181 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.181 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.181 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.181 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:49.181 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:49.440 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:49.440 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.440 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:49.441 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:49.441 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:49.441 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.441 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.441 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.441 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.441 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.441 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.441 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.441 10:43:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.700 00:14:49.700 10:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.700 10:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.700 10:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.959 10:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.959 10:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.959 10:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.959 10:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.959 10:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.959 10:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.959 { 00:14:49.959 "cntlid": 109, 00:14:49.959 "qid": 0, 00:14:49.959 "state": "enabled", 00:14:49.959 "thread": "nvmf_tgt_poll_group_000", 00:14:49.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:49.959 "listen_address": { 00:14:49.959 "trtype": "RDMA", 00:14:49.959 "adrfam": "IPv4", 00:14:49.959 "traddr": "192.168.100.8", 00:14:49.959 "trsvcid": "4420" 00:14:49.959 }, 00:14:49.959 "peer_address": { 00:14:49.959 "trtype": "RDMA", 00:14:49.959 "adrfam": "IPv4", 00:14:49.959 "traddr": "192.168.100.8", 00:14:49.959 "trsvcid": "34516" 00:14:49.959 }, 00:14:49.959 "auth": { 00:14:49.959 "state": "completed", 00:14:49.959 "digest": "sha512", 00:14:49.959 "dhgroup": "ffdhe2048" 00:14:49.959 } 00:14:49.959 } 00:14:49.959 ]' 00:14:49.959 10:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.959 10:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:49.959 10:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.959 10:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:49.959 10:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.959 10:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.959 10:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.959 10:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.218 10:43:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:14:50.218 10:43:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:14:50.786 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:51.046 10:43:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:51.046 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:51.305 00:14:51.305 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.305 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.305 10:43:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.564 10:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.564 10:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.564 10:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.564 10:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.564 10:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.564 10:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.564 { 00:14:51.564 "cntlid": 111, 00:14:51.564 "qid": 0, 00:14:51.564 "state": "enabled", 00:14:51.564 "thread": "nvmf_tgt_poll_group_000", 00:14:51.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:51.564 "listen_address": { 00:14:51.564 "trtype": "RDMA", 00:14:51.564 "adrfam": "IPv4", 00:14:51.564 "traddr": "192.168.100.8", 00:14:51.564 "trsvcid": "4420" 00:14:51.564 }, 00:14:51.564 "peer_address": { 00:14:51.564 "trtype": "RDMA", 00:14:51.564 "adrfam": "IPv4", 00:14:51.564 "traddr": "192.168.100.8", 00:14:51.564 "trsvcid": "47639" 00:14:51.564 }, 00:14:51.564 "auth": { 00:14:51.564 "state": "completed", 00:14:51.564 "digest": "sha512", 00:14:51.564 "dhgroup": "ffdhe2048" 00:14:51.564 } 00:14:51.564 } 00:14:51.564 ]' 00:14:51.565 10:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.565 10:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:51.565 10:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.565 10:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:51.565 10:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.824 10:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.824 10:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.824 10:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.824 10:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:14:51.824 10:43:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:14:52.393 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.652 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:52.652 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.652 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.652 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.652 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:52.653 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.653 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:52.653 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:52.911 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:52.911 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.911 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:52.911 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:52.911 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:52.911 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.911 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.911 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.911 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.911 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:52.912 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.912 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.912 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.185 00:14:53.185 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.185 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.185 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.185 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.185 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.185 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.185 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.443 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.443 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.443 { 00:14:53.443 "cntlid": 113, 00:14:53.443 "qid": 0, 00:14:53.443 "state": "enabled", 00:14:53.443 "thread": "nvmf_tgt_poll_group_000", 00:14:53.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:53.443 "listen_address": { 00:14:53.443 "trtype": "RDMA", 00:14:53.443 "adrfam": "IPv4", 00:14:53.443 "traddr": "192.168.100.8", 00:14:53.443 "trsvcid": "4420" 00:14:53.443 }, 00:14:53.443 "peer_address": { 00:14:53.443 "trtype": "RDMA", 00:14:53.443 "adrfam": "IPv4", 00:14:53.443 "traddr": "192.168.100.8", 00:14:53.443 "trsvcid": "46765" 00:14:53.443 }, 00:14:53.443 "auth": { 00:14:53.443 "state": "completed", 00:14:53.443 "digest": "sha512", 00:14:53.443 "dhgroup": "ffdhe3072" 00:14:53.443 } 00:14:53.443 } 00:14:53.443 ]' 00:14:53.443 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.443 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:53.443 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.443 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:53.443 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.443 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.443 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.443 10:43:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.702 10:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:14:53.702 10:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:14:54.270 10:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.270 10:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:54.270 10:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.270 10:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.270 10:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.270 10:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.270 10:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:54.270 10:43:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:54.529 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:54.529 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.529 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:54.529 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:54.529 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:54.529 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.529 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 
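On the host side, each round in the trace boils down to the same short RPC sequence against the host app's socket. A condensed sketch, assuming the rpc.py path is shortened and $hostnqn holds the host NQN from the trace:

# Condensed sketch of one round's host-side RPCs (paths abbreviated).
rpc="scripts/rpc.py -s /var/tmp/host.sock"
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
$rpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
$rpc bdev_nvme_detach_controller nvme0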
00:14:54.529 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.529 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.529 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.529 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.529 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.529 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.789 00:14:54.789 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.789 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.789 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.048 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.048 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.048 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.048 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.048 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.048 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.048 { 00:14:55.048 "cntlid": 115, 00:14:55.048 "qid": 0, 00:14:55.048 "state": "enabled", 00:14:55.048 "thread": "nvmf_tgt_poll_group_000", 00:14:55.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:55.048 "listen_address": { 00:14:55.048 "trtype": "RDMA", 00:14:55.048 "adrfam": "IPv4", 00:14:55.048 "traddr": "192.168.100.8", 00:14:55.048 "trsvcid": "4420" 00:14:55.048 }, 00:14:55.048 "peer_address": { 00:14:55.048 "trtype": "RDMA", 00:14:55.048 "adrfam": "IPv4", 00:14:55.048 "traddr": "192.168.100.8", 00:14:55.048 "trsvcid": "36144" 00:14:55.048 }, 00:14:55.048 "auth": { 00:14:55.048 "state": "completed", 00:14:55.049 "digest": "sha512", 00:14:55.049 "dhgroup": "ffdhe3072" 00:14:55.049 } 00:14:55.049 } 00:14:55.049 ]' 00:14:55.049 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.049 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:55.049 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
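The assertions that follow each attach are plain jq probes over the target's nvmf_subsystem_get_qpairs output. A sketch using the field names visible in the qpairs JSON above:

# Sketch of the post-attach checks; rpc_cmd is the test suite's target-side RPC helper.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]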
00:14:55.049 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:55.049 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.049 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.049 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.049 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.308 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:14:55.308 10:43:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:14:55.875 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.135 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:56.135 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.135 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.135 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.135 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.135 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:56.135 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:56.135 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:56.135 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.135 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:56.135 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:56.135 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:56.135 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.135 
10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.135 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.135 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.394 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.394 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.394 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.394 10:43:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.394 00:14:56.654 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.654 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.654 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.654 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.654 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.654 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.654 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.654 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.654 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.654 { 00:14:56.654 "cntlid": 117, 00:14:56.654 "qid": 0, 00:14:56.654 "state": "enabled", 00:14:56.654 "thread": "nvmf_tgt_poll_group_000", 00:14:56.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:56.654 "listen_address": { 00:14:56.654 "trtype": "RDMA", 00:14:56.654 "adrfam": "IPv4", 00:14:56.654 "traddr": "192.168.100.8", 00:14:56.654 "trsvcid": "4420" 00:14:56.654 }, 00:14:56.654 "peer_address": { 00:14:56.654 "trtype": "RDMA", 00:14:56.654 "adrfam": "IPv4", 00:14:56.654 "traddr": "192.168.100.8", 00:14:56.654 "trsvcid": "43544" 00:14:56.654 }, 00:14:56.654 "auth": { 00:14:56.654 "state": "completed", 00:14:56.654 "digest": "sha512", 00:14:56.654 "dhgroup": "ffdhe3072" 00:14:56.654 } 00:14:56.654 } 00:14:56.654 ]' 00:14:56.654 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:14:56.913 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:56.913 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.913 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:56.913 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.913 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.913 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.913 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.173 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:14:57.173 10:43:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:14:57.741 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.741 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:57.741 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.741 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.741 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.741 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.741 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:57.741 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:58.001 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:14:58.001 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.001 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:58.001 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
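The sweep driving this stretch of the log is the nested loop visible in the trace at target/auth.sh@119-123: every DH group is retried with every key index, and key3, which has no paired ckey3, exercises the host-key-only path. A sketch of the shape, reusing the script's own hostrpc and connect_authenticate helpers:

# Sketch of the sweep (per the for-loops at target/auth.sh@119-123).
for dhgroup in "${dhgroups[@]}"; do            # null ffdhe2048 ffdhe3072 ...
  for keyid in "${!keys[@]}"; do               # 0 1 2 3
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
    connect_authenticate sha512 "$dhgroup" "$keyid"
  done
done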
00:14:58.001 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:58.001 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.001 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:14:58.001 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.001 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.001 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.001 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:58.001 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:58.001 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:58.260 00:14:58.260 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.260 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.260 10:43:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.519 10:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.519 10:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.519 10:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.519 10:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.519 10:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.519 10:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.519 { 00:14:58.519 "cntlid": 119, 00:14:58.519 "qid": 0, 00:14:58.519 "state": "enabled", 00:14:58.519 "thread": "nvmf_tgt_poll_group_000", 00:14:58.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:14:58.519 "listen_address": { 00:14:58.519 "trtype": "RDMA", 00:14:58.519 "adrfam": "IPv4", 00:14:58.519 "traddr": "192.168.100.8", 00:14:58.519 "trsvcid": "4420" 00:14:58.519 }, 00:14:58.519 "peer_address": { 00:14:58.519 "trtype": "RDMA", 00:14:58.519 "adrfam": "IPv4", 00:14:58.519 "traddr": "192.168.100.8", 00:14:58.519 "trsvcid": "40829" 00:14:58.519 }, 00:14:58.519 "auth": { 00:14:58.519 "state": "completed", 00:14:58.519 "digest": "sha512", 00:14:58.519 "dhgroup": "ffdhe3072" 
00:14:58.519 } 00:14:58.519 } 00:14:58.519 ]' 00:14:58.519 10:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.519 10:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:58.519 10:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.519 10:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:58.519 10:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.519 10:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.519 10:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.519 10:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.778 10:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:14:58.778 10:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:14:59.347 10:43:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:59.606 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:00.174
00:15:00.174 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:00.174 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:00.174 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:00.174 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:00.174 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:00.174 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.174 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:00.174 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.174 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:00.175 {
00:15:00.175 "cntlid": 121,
00:15:00.175 "qid": 0,
00:15:00.175 "state": "enabled",
00:15:00.175 "thread": "nvmf_tgt_poll_group_000",
00:15:00.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:00.175 "listen_address": {
00:15:00.175 "trtype": "RDMA",
00:15:00.175 "adrfam": "IPv4",
00:15:00.175 "traddr": "192.168.100.8",
00:15:00.175 "trsvcid": "4420"
00:15:00.175 },
00:15:00.175 "peer_address": {
00:15:00.175 "trtype": "RDMA",
00:15:00.175 "adrfam": "IPv4",
00:15:00.175 "traddr": "192.168.100.8",
00:15:00.175 "trsvcid": "51690"
00:15:00.175 },
00:15:00.175 "auth": {
00:15:00.175 "state": "completed",
00:15:00.175 "digest": "sha512",
00:15:00.175 "dhgroup": "ffdhe4096"
00:15:00.175 }
00:15:00.175 }
00:15:00.175 ]'
00:15:00.175 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:00.175 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:00.175 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:00.434 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:00.434 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:00.434 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:00.434 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:00.434 10:43:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:00.692 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=:
00:15:00.692 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=:
00:15:01.261 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:01.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:01.261 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:01.261 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.261 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:01.261 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.261 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:01.261 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:15:01.261 10:43:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
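[Editorial aside] The `jq -r` probes above (auth.sh@74-77) are how the test asserts what was actually negotiated: it pulls the qpair list from the target and compares three fields. The strings like `\s\h\a\5\1\2` in the trace are not corruption; bash xtrace escapes the right-hand side of `[[ ... == pattern ]]` character by character. In isolation the check boils down to this (paths and NQNs as in this run; `rpc_cmd` talks to the target instance on its default RPC socket):

qpairs=$(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512"    ]]   # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]   # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]   # authentication finished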
00:15:01.520 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:15:01.520 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:01.520 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:01.520 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:15:01.520 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:01.520 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:01.520 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:01.520 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.520 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:01.520 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.520 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:01.520 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:01.520 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:01.779
00:15:01.779 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:01.779 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:01.780 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:02.039 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:02.039 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:02.039 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:02.039 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:02.039 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:02.039 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:02.039 {
00:15:02.039 "cntlid": 123,
00:15:02.039 "qid": 0,
00:15:02.039 "state": "enabled",
00:15:02.039 "thread": "nvmf_tgt_poll_group_000",
00:15:02.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:02.039 "listen_address": {
00:15:02.039 "trtype": "RDMA",
00:15:02.039 "adrfam": "IPv4",
00:15:02.039 "traddr": "192.168.100.8",
00:15:02.039 "trsvcid": "4420"
00:15:02.039 },
00:15:02.039 "peer_address": {
00:15:02.039 "trtype": "RDMA",
00:15:02.039 "adrfam": "IPv4",
00:15:02.039 "traddr": "192.168.100.8",
00:15:02.039 "trsvcid": "45365"
00:15:02.039 },
00:15:02.039 "auth": {
00:15:02.039 "state": "completed",
00:15:02.039 "digest": "sha512",
00:15:02.039 "dhgroup": "ffdhe4096"
00:15:02.039 }
00:15:02.039 }
00:15:02.039 ]'
00:15:02.039 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:02.039 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:02.039 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:02.039 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:02.039 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:02.039 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:02.039 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:02.039 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:02.298 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==:
00:15:02.298 10:43:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==:
00:15:02.866 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:03.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
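[Editorial aside] The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` line seen in every iteration is a bash ":+" alternate expansion: it builds an array that is either empty or the two-token flag pair, depending on whether a controller (bidirectional) key is defined for that key index. That is why the key3 passes in this log add the host with --dhchap-key only. A minimal standalone demo of the idiom (values hypothetical, not from this run):

ckeys=("c0" "c1" "c2" "")                                 # hypothetical; slot 3 left empty, as key3 behaves here
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})  # empty/unset value -> expands to nothing
echo "${#ckey[@]}"                                        # prints 0: no controller-key flags passed for key3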
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:03.126 10:43:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:03.385
00:15:03.385 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:03.385 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:03.385 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:03.645 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:03.645 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:03.645 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.645 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:03.645 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.645 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:03.645 {
00:15:03.645 "cntlid": 125,
00:15:03.645 "qid": 0,
00:15:03.645 "state": "enabled",
00:15:03.645 "thread": "nvmf_tgt_poll_group_000",
00:15:03.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:03.645 "listen_address": {
00:15:03.645 "trtype": "RDMA",
00:15:03.645 "adrfam": "IPv4",
00:15:03.645 "traddr": "192.168.100.8",
00:15:03.645 "trsvcid": "4420"
00:15:03.645 },
00:15:03.645 "peer_address": {
00:15:03.645 "trtype": "RDMA",
00:15:03.645 "adrfam": "IPv4",
00:15:03.645 "traddr": "192.168.100.8",
00:15:03.645 "trsvcid": "48932"
00:15:03.645 },
00:15:03.645 "auth": {
00:15:03.645 "state": "completed",
00:15:03.645 "digest": "sha512",
00:15:03.645 "dhgroup": "ffdhe4096"
00:15:03.645 }
00:15:03.645 }
00:15:03.645 ]'
00:15:03.645 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:03.645 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:03.645 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:03.645 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:03.645 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:03.904 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:03.904 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:03.904 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:03.904 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW:
00:15:03.904 10:43:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW:
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:04.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:04.842 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:05.102
00:15:05.102 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:05.102 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:05.102 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:05.361 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:05.361 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:05.361 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:05.361 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:05.361 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:05.361 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:05.361 {
00:15:05.361 "cntlid": 127,
00:15:05.361 "qid": 0,
00:15:05.361 "state": "enabled",
00:15:05.361 "thread": "nvmf_tgt_poll_group_000",
00:15:05.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:05.361 "listen_address": {
00:15:05.361 "trtype": "RDMA",
00:15:05.361 "adrfam": "IPv4",
00:15:05.361 "traddr": "192.168.100.8",
00:15:05.361 "trsvcid": "4420"
00:15:05.361 },
00:15:05.361 "peer_address": {
00:15:05.361 "trtype": "RDMA",
00:15:05.361 "adrfam": "IPv4",
00:15:05.361 "traddr": "192.168.100.8",
00:15:05.361 "trsvcid": "51872"
00:15:05.361 },
00:15:05.361 "auth": {
00:15:05.361 "state": "completed",
00:15:05.361 "digest": "sha512",
00:15:05.361 "dhgroup": "ffdhe4096"
00:15:05.361 }
00:15:05.361 }
00:15:05.361 ]'
00:15:05.361 10:43:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:05.361 10:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:05.361 10:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:05.620 10:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:05.620 10:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:05.620 10:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:05.620 10:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:05.620 10:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:05.620 10:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=:
00:15:05.620 10:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=:
00:15:06.558 10:43:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:06.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
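[Editorial aside] Two SPDK app instances are in play in this trace: `rpc_cmd` (all the nvmf_* calls) goes to the target app on its default RPC socket, while every bdev_nvme_* call is issued through `hostrpc`, which the auth.sh@31 expansions consistently show to be rpc.py pinned to the host app's socket. A sketch of that wrapper as implied by the trace (not the verbatim auth.sh source):

HOST_SOCK=/var/tmp/host.sock
hostrpc() {
    # forward any RPC to the host-side SPDK instance
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s "$HOST_SOCK" "$@"
}
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0, as asserted above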
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:06.558 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:07.127
00:15:07.127 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:07.127 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:07.127 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:07.127 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:07.127 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:07.127 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:07.127 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:07.127 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:07.127 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:07.127 {
00:15:07.127 "cntlid": 129,
00:15:07.127 "qid": 0,
00:15:07.127 "state": "enabled",
00:15:07.127 "thread": "nvmf_tgt_poll_group_000",
00:15:07.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:07.127 "listen_address": {
00:15:07.127 "trtype": "RDMA",
00:15:07.127 "adrfam": "IPv4",
00:15:07.127 "traddr": "192.168.100.8",
00:15:07.127 "trsvcid": "4420"
00:15:07.127 },
00:15:07.127 "peer_address": {
00:15:07.127 "trtype": "RDMA",
00:15:07.127 "adrfam": "IPv4",
00:15:07.127 "traddr": "192.168.100.8",
00:15:07.127 "trsvcid": "46014"
00:15:07.127 },
00:15:07.127 "auth": {
00:15:07.127 "state": "completed",
00:15:07.127 "digest": "sha512",
00:15:07.127 "dhgroup": "ffdhe6144"
00:15:07.127 }
00:15:07.127 }
00:15:07.127 ]'
00:15:07.387 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:07.387 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:07.387 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:07.387 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:07.387 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:07.387 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:07.387 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:07.387 10:43:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:07.649 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=:
00:15:07.649 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=:
00:15:08.288 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:08.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
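[Editorial aside] The `--dhchap-secret`/`--dhchap-ctrl-secret` strings above follow the NVMe DH-HMAC-CHAP secret representation, `DHHC-1:tt:<base64 of key material + CRC>:`. Per my reading of the spec (TP 8006) the `tt` field indicates the secret's hash usage: `00` means the secret is used as-is, while `01`/`02`/`03` correspond to the 32/48/64-byte HMAC variants; that matches the varying key lengths visible in this log, but treat it as background, not something the log itself states. Recent nvme-cli versions can generate such secrets; the invocation below is an assumption from memory, so verify against `nvme gen-dhchap-key --help`:

# Assumed nvme-cli usage (hypothetical example, not from this run); emits a DHHC-1:03:...: style secret
nvme gen-dhchap-key --hmac=3 --key-length=64 --nqn nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e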
00:15:08.288 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:08.288 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.288 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:08.288 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.288 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:08.288 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:15:08.288 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:15:08.548 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1
00:15:08.548 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:08.548 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:08.548 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:08.548 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:08.548 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:08.548 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:08.548 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.548 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:08.548 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.548 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:08.548 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:08.548 10:43:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:08.807
00:15:08.807 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:08.807 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:08.807 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:09.066 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:09.066 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:09.067 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.067 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:09.067 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.067 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:09.067 {
00:15:09.067 "cntlid": 131,
00:15:09.067 "qid": 0,
00:15:09.067 "state": "enabled",
00:15:09.067 "thread": "nvmf_tgt_poll_group_000",
00:15:09.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:09.067 "listen_address": {
00:15:09.067 "trtype": "RDMA",
00:15:09.067 "adrfam": "IPv4",
00:15:09.067 "traddr": "192.168.100.8",
00:15:09.067 "trsvcid": "4420"
00:15:09.067 },
00:15:09.067 "peer_address": {
00:15:09.067 "trtype": "RDMA",
00:15:09.067 "adrfam": "IPv4",
00:15:09.067 "traddr": "192.168.100.8",
00:15:09.067 "trsvcid": "45173"
00:15:09.067 },
00:15:09.067 "auth": {
00:15:09.067 "state": "completed",
00:15:09.067 "digest": "sha512",
00:15:09.067 "dhgroup": "ffdhe6144"
00:15:09.067 }
00:15:09.067 }
00:15:09.067 ]'
00:15:09.067 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:09.067 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:09.067 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:09.067 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:09.067 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:09.067 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:09.067 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:09.067 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:09.326 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==:
00:15:09.326 10:43:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==:
00:15:09.895 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:10.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:10.154 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:10.154 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.154 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:10.154 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.154 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:10.154 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:15:10.154 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:15:10.154 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2
00:15:10.154 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:10.154 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:10.154 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:10.154 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:10.154 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:10.154 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:10.154 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.154 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:10.413 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.413 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:10.413 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:10.413 10:43:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:10.673
00:15:10.673 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:10.673 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:10.673 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:10.932 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:10.932 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:10.932 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.932 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:10.932 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.932 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:10.932 {
00:15:10.932 "cntlid": 133,
00:15:10.932 "qid": 0,
00:15:10.932 "state": "enabled",
00:15:10.932 "thread": "nvmf_tgt_poll_group_000",
00:15:10.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:10.932 "listen_address": {
00:15:10.932 "trtype": "RDMA",
00:15:10.932 "adrfam": "IPv4",
00:15:10.932 "traddr": "192.168.100.8",
00:15:10.932 "trsvcid": "4420"
00:15:10.932 },
00:15:10.932 "peer_address": {
00:15:10.932 "trtype": "RDMA",
00:15:10.932 "adrfam": "IPv4",
00:15:10.932 "traddr": "192.168.100.8",
00:15:10.932 "trsvcid": "38916"
00:15:10.932 },
00:15:10.932 "auth": {
00:15:10.932 "state": "completed",
00:15:10.932 "digest": "sha512",
00:15:10.932 "dhgroup": "ffdhe6144"
00:15:10.932 }
00:15:10.932 }
00:15:10.932 ]'
00:15:10.932 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:10.932 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:10.932 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:10.932 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:10.932 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:10.932 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:10.932 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:10.932 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:11.191 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW:
00:15:11.191 10:43:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW:
00:15:11.760 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:11.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:11.760 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:11.760 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.019 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:12.019 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.019 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:12.019 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:15:12.019 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:15:12.019 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:15:12.019 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:12.019 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:12.019 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:12.019 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:12.019 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:12.019 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3
00:15:12.019 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.019 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:12.019 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.019 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:12.019 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:12.019 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:12.587
00:15:12.587 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:12.587 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:12.587 10:43:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:12.587 10:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:12.587 10:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:12.587 10:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:12.587 10:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:12.587 10:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:12.587 10:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:12.587 {
00:15:12.587 "cntlid": 135,
00:15:12.587 "qid": 0,
00:15:12.587 "state": "enabled",
00:15:12.587 "thread": "nvmf_tgt_poll_group_000",
00:15:12.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:12.587 "listen_address": {
00:15:12.587 "trtype": "RDMA",
00:15:12.587 "adrfam": "IPv4",
00:15:12.587 "traddr": "192.168.100.8",
00:15:12.587 "trsvcid": "4420"
00:15:12.587 },
00:15:12.587 "peer_address": {
00:15:12.587 "trtype": "RDMA",
00:15:12.587 "adrfam": "IPv4",
00:15:12.587 "traddr": "192.168.100.8",
00:15:12.587 "trsvcid": "52253"
00:15:12.587 },
00:15:12.587 "auth": {
00:15:12.587 "state": "completed",
00:15:12.587 "digest": "sha512",
00:15:12.587 "dhgroup": "ffdhe6144"
00:15:12.587 }
00:15:12.587 }
00:15:12.587 ]'
00:15:12.587 10:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:12.587 10:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:12.587 10:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:12.846 10:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:12.846 10:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:12.846 10:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:12.846 10:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:12.846 10:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:13.104 10:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=:
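[Editorial aside] Alongside the SPDK bdev path, each iteration also exercises the kernel initiator with nvme-cli, reusing the same DHHC-1 secrets. Reading the connect lines above: `-t rdma -a 192.168.100.8` select transport and address (trsvcid defaulting to 4420), `-n` is the subsystem NQN, `-q`/`--hostid` identify the host (they must match the NQN registered via nvmf_subsystem_add_host), `-i 1` requests a single I/O queue, and `-l 0` sets a zero controller-loss timeout so a failed attempt does not linger. The recurring pattern, with secrets elided (placeholders, not values from this run):

nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 \
     -i 1 -l 0 -q "$HOSTNQN" --hostid "$HOSTID" \
     --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0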
00:15:13.104 10:43:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=:
00:15:13.672 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:13.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:13.672 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:13.672 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.672 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:13.672 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.672 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:13.672 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:13.672 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:15:13.672 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:15:13.931 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:15:13.931 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:13.931 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:13.931 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:13.931 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:13.931 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:13.931 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:13.931 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.931 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:13.931 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.931 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:13.931 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:13.931 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:14.500
00:15:14.500 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:14.500 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:14.500 10:43:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:14.759 10:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:14.759 10:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:14.759 10:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:14.759 10:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.759 10:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:14.759 10:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:14.759 {
00:15:14.759 "cntlid": 137,
00:15:14.759 "qid": 0,
00:15:14.759 "state": "enabled",
00:15:14.759 "thread": "nvmf_tgt_poll_group_000",
00:15:14.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e",
00:15:14.759 "listen_address": {
00:15:14.759 "trtype": "RDMA",
00:15:14.759 "adrfam": "IPv4",
00:15:14.759 "traddr": "192.168.100.8",
00:15:14.759 "trsvcid": "4420"
00:15:14.759 },
00:15:14.759 "peer_address": {
00:15:14.759 "trtype": "RDMA",
00:15:14.759 "adrfam": "IPv4",
00:15:14.759 "traddr": "192.168.100.8",
00:15:14.759 "trsvcid": "49029"
00:15:14.759 },
00:15:14.759 "auth": {
00:15:14.759 "state": "completed",
00:15:14.759 "digest": "sha512",
00:15:14.759 "dhgroup": "ffdhe8192"
00:15:14.759 }
00:15:14.759 }
00:15:14.759 ]'
00:15:14.759 10:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:14.759 10:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:14.759 10:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:14.759 10:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:15:14.759 10:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:14.759 10:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:14.759 10:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:14.759 10:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:15.018 10:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=:
00:15:15.018 10:43:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=:
00:15:15.585 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:15.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:15.585 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:15:15.585 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.585 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:15.585 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:15.585 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:15.585 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:15:15.585 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:15:15.845 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:15:15.845 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:15.845 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:15.845 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:15:15.845 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:15.845 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:15.845 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:15.845 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.845 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:15.845 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 --
# [[ 0 == 0 ]] 00:15:15.845 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.845 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.845 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.413 00:15:16.413 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.413 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.413 10:43:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.672 10:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.672 10:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.672 10:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.672 10:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.672 10:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.672 10:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.672 { 00:15:16.672 "cntlid": 139, 00:15:16.672 "qid": 0, 00:15:16.672 "state": "enabled", 00:15:16.672 "thread": "nvmf_tgt_poll_group_000", 00:15:16.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:16.672 "listen_address": { 00:15:16.672 "trtype": "RDMA", 00:15:16.672 "adrfam": "IPv4", 00:15:16.672 "traddr": "192.168.100.8", 00:15:16.672 "trsvcid": "4420" 00:15:16.672 }, 00:15:16.672 "peer_address": { 00:15:16.672 "trtype": "RDMA", 00:15:16.672 "adrfam": "IPv4", 00:15:16.672 "traddr": "192.168.100.8", 00:15:16.672 "trsvcid": "51232" 00:15:16.672 }, 00:15:16.672 "auth": { 00:15:16.672 "state": "completed", 00:15:16.672 "digest": "sha512", 00:15:16.672 "dhgroup": "ffdhe8192" 00:15:16.672 } 00:15:16.672 } 00:15:16.672 ]' 00:15:16.672 10:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.672 10:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:16.672 10:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.672 10:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:16.672 10:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.672 10:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.672 10:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.672 10:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.931 10:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:15:16.931 10:43:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: --dhchap-ctrl-secret DHHC-1:02:NWU0MThjMzc4NjM5MDc3MjllMThkM2EwMTg2YjVmZWQxMTRmNGVhMjhkOGY1NzE0Y8QjJw==: 00:15:17.499 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.758 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.325 00:15:18.325 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.325 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.326 10:43:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.584 10:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.584 10:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.584 10:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.584 10:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.584 10:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.584 10:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.584 { 00:15:18.584 "cntlid": 141, 00:15:18.584 "qid": 0, 00:15:18.584 "state": "enabled", 00:15:18.584 "thread": "nvmf_tgt_poll_group_000", 00:15:18.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:18.584 "listen_address": { 00:15:18.584 "trtype": "RDMA", 00:15:18.584 "adrfam": "IPv4", 00:15:18.584 "traddr": "192.168.100.8", 00:15:18.584 "trsvcid": "4420" 00:15:18.584 }, 00:15:18.584 "peer_address": { 00:15:18.584 "trtype": "RDMA", 00:15:18.584 "adrfam": "IPv4", 00:15:18.584 "traddr": "192.168.100.8", 00:15:18.584 "trsvcid": "35113" 00:15:18.584 }, 00:15:18.584 "auth": { 00:15:18.584 "state": "completed", 00:15:18.584 "digest": "sha512", 00:15:18.584 "dhgroup": "ffdhe8192" 00:15:18.584 } 00:15:18.584 } 00:15:18.584 ]' 00:15:18.584 10:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.584 10:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:18.584 10:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.584 10:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:18.584 10:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.584 10:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.584 10:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.584 10:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.843 10:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:15:18.843 10:43:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjOGVlZTlhMGJiY2M4YTA4NTcxYzEyNDFlNmUxZjDW2pwW: 00:15:19.410 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.669 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:19.669 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.669 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.669 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.669 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.669 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:19.669 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:19.669 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:15:19.669 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.669 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:19.669 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:19.669 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:19.669 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.669 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:15:19.669 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.669 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.928 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.928 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:19.928 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.928 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.187 00:15:20.187 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.187 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.187 10:43:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.445 10:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.445 10:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.445 10:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.445 10:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.446 10:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.446 10:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.446 { 00:15:20.446 "cntlid": 143, 00:15:20.446 "qid": 0, 00:15:20.446 "state": "enabled", 00:15:20.446 "thread": "nvmf_tgt_poll_group_000", 00:15:20.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:20.446 "listen_address": { 00:15:20.446 "trtype": "RDMA", 00:15:20.446 "adrfam": "IPv4", 00:15:20.446 "traddr": "192.168.100.8", 00:15:20.446 "trsvcid": "4420" 00:15:20.446 }, 00:15:20.446 "peer_address": { 00:15:20.446 "trtype": "RDMA", 00:15:20.446 "adrfam": "IPv4", 00:15:20.446 "traddr": "192.168.100.8", 00:15:20.446 "trsvcid": "49126" 00:15:20.446 }, 00:15:20.446 "auth": { 00:15:20.446 "state": "completed", 00:15:20.446 "digest": "sha512", 00:15:20.446 "dhgroup": "ffdhe8192" 00:15:20.446 } 00:15:20.446 } 00:15:20.446 ]' 00:15:20.446 10:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.446 10:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:20.446 10:43:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.446 10:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:20.446 10:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.704 10:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.704 10:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.704 10:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.704 10:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:15:20.704 10:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:15:21.641 10:43:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.641 10:43:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.641 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.205 00:15:22.205 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.205 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.205 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.462 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.462 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.462 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.462 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.462 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.462 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.462 { 00:15:22.462 "cntlid": 145, 00:15:22.462 "qid": 0, 00:15:22.462 "state": "enabled", 00:15:22.462 "thread": "nvmf_tgt_poll_group_000", 00:15:22.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:22.462 "listen_address": { 00:15:22.462 "trtype": "RDMA", 00:15:22.462 "adrfam": "IPv4", 00:15:22.462 "traddr": "192.168.100.8", 00:15:22.462 "trsvcid": "4420" 00:15:22.462 }, 00:15:22.462 
"peer_address": { 00:15:22.462 "trtype": "RDMA", 00:15:22.462 "adrfam": "IPv4", 00:15:22.462 "traddr": "192.168.100.8", 00:15:22.462 "trsvcid": "35090" 00:15:22.462 }, 00:15:22.462 "auth": { 00:15:22.462 "state": "completed", 00:15:22.462 "digest": "sha512", 00:15:22.462 "dhgroup": "ffdhe8192" 00:15:22.462 } 00:15:22.462 } 00:15:22.462 ]' 00:15:22.462 10:43:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.462 10:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:22.462 10:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.463 10:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:22.463 10:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.463 10:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.463 10:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.463 10:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.721 10:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:15:22.721 10:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:MjBkNDIwMDJlNmM2YjRkOGMzZGZiOTc4NGNjZDc3ZDY5MWI4MWM2ZDQ4ZjhkOWZleMhgjw==: --dhchap-ctrl-secret DHHC-1:03:YjJhM2RjMjJhZTI3MDZlZjFjZjE0MmIzYTg3NzJiMDA0NDcyYjI0MTg4N2UwYzkwZWY3ZDMzMTA1NTY2YmQ2ZmCD1ME=: 00:15:23.288 10:43:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.547 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:23.547 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.547 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.547 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.547 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:15:23.547 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.547 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.547 10:43:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.547 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:23.547 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:23.547 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:23.547 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:23.547 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:23.547 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:23.547 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:23.547 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:23.547 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:23.547 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:24.115 request: 00:15:24.115 { 00:15:24.115 "name": "nvme0", 00:15:24.115 "trtype": "rdma", 00:15:24.115 "traddr": "192.168.100.8", 00:15:24.115 "adrfam": "ipv4", 00:15:24.115 "trsvcid": "4420", 00:15:24.115 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:24.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:24.115 "prchk_reftag": false, 00:15:24.115 "prchk_guard": false, 00:15:24.115 "hdgst": false, 00:15:24.115 "ddgst": false, 00:15:24.115 "dhchap_key": "key2", 00:15:24.115 "allow_unrecognized_csi": false, 00:15:24.115 "method": "bdev_nvme_attach_controller", 00:15:24.115 "req_id": 1 00:15:24.115 } 00:15:24.115 Got JSON-RPC error response 00:15:24.115 response: 00:15:24.115 { 00:15:24.115 "code": -5, 00:15:24.115 "message": "Input/output error" 00:15:24.115 } 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:24.115 10:43:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:24.374 request: 00:15:24.374 { 00:15:24.374 "name": "nvme0", 00:15:24.374 "trtype": "rdma", 00:15:24.374 "traddr": "192.168.100.8", 00:15:24.374 "adrfam": "ipv4", 00:15:24.374 "trsvcid": "4420", 00:15:24.374 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:24.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:24.374 "prchk_reftag": false, 00:15:24.374 "prchk_guard": false, 00:15:24.374 "hdgst": false, 00:15:24.374 "ddgst": false, 00:15:24.374 "dhchap_key": "key1", 00:15:24.374 "dhchap_ctrlr_key": "ckey2", 00:15:24.374 "allow_unrecognized_csi": false, 00:15:24.374 "method": "bdev_nvme_attach_controller", 00:15:24.374 "req_id": 1 00:15:24.374 } 00:15:24.374 Got JSON-RPC error response 00:15:24.374 response: 00:15:24.374 { 00:15:24.374 "code": -5, 00:15:24.374 "message": "Input/output error" 00:15:24.374 } 00:15:24.632 10:43:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:24.632 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:24.632 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:24.632 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:24.632 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:24.632 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.632 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.632 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.633 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:15:24.633 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.633 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.633 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.633 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.633 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:24.633 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.633 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:24.633 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:24.633 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:24.633 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:24.633 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.633 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.633 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.892 request: 00:15:24.892 { 00:15:24.892 "name": "nvme0", 
00:15:24.892 "trtype": "rdma", 00:15:24.892 "traddr": "192.168.100.8", 00:15:24.892 "adrfam": "ipv4", 00:15:24.892 "trsvcid": "4420", 00:15:24.892 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:24.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:24.892 "prchk_reftag": false, 00:15:24.892 "prchk_guard": false, 00:15:24.892 "hdgst": false, 00:15:24.892 "ddgst": false, 00:15:24.892 "dhchap_key": "key1", 00:15:24.892 "dhchap_ctrlr_key": "ckey1", 00:15:24.892 "allow_unrecognized_csi": false, 00:15:24.892 "method": "bdev_nvme_attach_controller", 00:15:24.892 "req_id": 1 00:15:24.892 } 00:15:24.892 Got JSON-RPC error response 00:15:24.892 response: 00:15:24.892 { 00:15:24.892 "code": -5, 00:15:24.892 "message": "Input/output error" 00:15:24.892 } 00:15:24.892 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:24.892 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:24.892 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:24.892 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:24.892 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:24.892 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.892 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.892 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.892 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3752335 00:15:24.892 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3752335 ']' 00:15:24.892 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3752335 00:15:24.892 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:15:24.892 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:25.151 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3752335 00:15:25.151 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:25.151 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:25.151 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3752335' 00:15:25.151 killing process with pid 3752335 00:15:25.151 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3752335 00:15:25.151 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3752335 00:15:25.151 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:25.151 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:25.151 10:43:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:25.151 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.410 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3776352 00:15:25.411 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:25.411 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3776352 00:15:25.411 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3776352 ']' 00:15:25.411 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.411 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:25.411 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.411 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:25.411 10:43:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3776352 00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3776352 ']' 00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
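The restart captured here follows the usual autotest pattern: kill the previous nvmf_tgt, launch a fresh one with the nvmf_auth debug log flag, and wait for its RPC socket. Roughly, in shell (the polling loop is a simplified stand-in for the waitforlisten helper, and old_pid is a placeholder for the previous target's pid):

  kill "$old_pid" 2>/dev/null; wait "$old_pid" 2>/dev/null || true

  # -e 0xFFFF sets the tracepoint group mask; --wait-for-rpc holds framework
  # init until an explicit framework-start RPC, but the RPC socket comes up first
  build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!

  # poll the default socket (/var/tmp/spdk.sock) until it answers
  until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done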
00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.347 10:43:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.347 null0 00:15:26.606 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.606 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:26.606 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.s5A 00:15:26.606 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.606 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.606 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.606 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.dCV ]] 00:15:26.606 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dCV 00:15:26.606 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.606 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.606 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.606 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:26.606 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.8dc 00:15:26.606 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.606 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.606 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.606 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.IdO ]] 00:15:26.606 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IdO 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
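The key-registration loop traced around this point (it continues just below with key2 and key3) simply loads each generated key file into the target's keyring under a fixed name; per key pair the calls look like the following, using the exact paths from this run:

  # register the DHCHAP key, plus its controller counterpart when one was generated
  scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-sha256.8dc
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IdO

The names key0..key3 and ckey0..ckey3 are what the later nvmf_subsystem_add_host and bdev_nvme_attach_controller calls reference via --dhchap-key and --dhchap-ctrlr-key.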
00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Yy8 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.JvP ]] 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JvP 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.JXF 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:26.607 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.543 nvme0n1 00:15:27.543 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.543 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.544 10:43:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.544 10:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.544 10:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.544 10:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.544 10:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.544 10:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.544 10:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.544 { 00:15:27.544 "cntlid": 1, 00:15:27.544 "qid": 0, 00:15:27.544 "state": "enabled", 00:15:27.544 "thread": "nvmf_tgt_poll_group_000", 00:15:27.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:27.544 "listen_address": { 00:15:27.544 "trtype": "RDMA", 00:15:27.544 "adrfam": "IPv4", 00:15:27.544 "traddr": "192.168.100.8", 00:15:27.544 "trsvcid": "4420" 00:15:27.544 }, 00:15:27.544 "peer_address": { 00:15:27.544 "trtype": "RDMA", 00:15:27.544 "adrfam": "IPv4", 00:15:27.544 "traddr": "192.168.100.8", 00:15:27.544 "trsvcid": "44687" 00:15:27.544 }, 00:15:27.544 "auth": { 00:15:27.544 "state": "completed", 00:15:27.544 "digest": "sha512", 00:15:27.544 "dhgroup": "ffdhe8192" 00:15:27.544 } 00:15:27.544 } 00:15:27.544 ]' 00:15:27.544 10:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.544 10:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:27.544 10:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.544 10:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:27.544 10:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.803 10:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.803 10:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.803 10:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.803 10:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
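The qpairs dump above is the target-side proof that the handshake completed: once DH-HMAC-CHAP finishes, nvmf_subsystem_get_qpairs reports an "auth" object per queue pair, and the test asserts the digest, dhgroup and state it configured. The same check can be reproduced by hand with the RPC and jq filters used in this run:

/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 > /tmp/qpairs.json
jq -r '.[0].auth.digest'  /tmp/qpairs.json   # expect: sha512
jq -r '.[0].auth.dhgroup' /tmp/qpairs.json   # expect: ffdhe8192
jq -r '.[0].auth.state'   /tmp/qpairs.json   # expect: completed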
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:15:27.803 10:43:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:15:28.739 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.739 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.740 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.999 request: 00:15:28.999 { 00:15:28.999 "name": "nvme0", 00:15:28.999 "trtype": "rdma", 00:15:28.999 "traddr": "192.168.100.8", 00:15:28.999 "adrfam": "ipv4", 00:15:28.999 "trsvcid": "4420", 00:15:28.999 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:28.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:28.999 "prchk_reftag": false, 00:15:28.999 "prchk_guard": false, 00:15:28.999 "hdgst": false, 00:15:28.999 "ddgst": false, 00:15:28.999 "dhchap_key": "key3", 00:15:28.999 "allow_unrecognized_csi": false, 00:15:28.999 "method": "bdev_nvme_attach_controller", 00:15:28.999 "req_id": 1 00:15:28.999 } 00:15:28.999 Got JSON-RPC error response 00:15:28.999 response: 00:15:28.999 { 00:15:28.999 "code": -5, 00:15:28.999 "message": "Input/output error" 00:15:28.999 } 00:15:28.999 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:28.999 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:28.999 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:28.999 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:28.999 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:28.999 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:28.999 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:28.999 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:29.259 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:29.259 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:29.259 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:29.259 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:29.259 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.259 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:29.259 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.259 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 
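The request/response pair above is the first deliberate failure: the host side was restricted to --dhchap-digests sha256 while the host entry on the target uses key3, a SHA-512-transformed key, so no usable hash can be negotiated and bdev_nvme_attach_controller returns JSON-RPC code -5 ("Input/output error"). The trace then repeats the pattern for DH groups — all digests are re-enabled but the host is pinned to --dhchap-dhgroups ffdhe2048, and the attach fails the same way. The failing combination, condensed (rpc.py abbreviates the full scripts/rpc.py path seen in the log):

rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3   # -> code -5, Input/output error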
00:15:29.259 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.259 10:43:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.518 request: 00:15:29.518 { 00:15:29.518 "name": "nvme0", 00:15:29.518 "trtype": "rdma", 00:15:29.518 "traddr": "192.168.100.8", 00:15:29.518 "adrfam": "ipv4", 00:15:29.518 "trsvcid": "4420", 00:15:29.518 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:29.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:29.518 "prchk_reftag": false, 00:15:29.518 "prchk_guard": false, 00:15:29.518 "hdgst": false, 00:15:29.518 "ddgst": false, 00:15:29.518 "dhchap_key": "key3", 00:15:29.518 "allow_unrecognized_csi": false, 00:15:29.518 "method": "bdev_nvme_attach_controller", 00:15:29.518 "req_id": 1 00:15:29.518 } 00:15:29.518 Got JSON-RPC error response 00:15:29.518 response: 00:15:29.518 { 00:15:29.518 "code": -5, 00:15:29.518 "message": "Input/output error" 00:15:29.518 } 00:15:29.518 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:29.518 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:29.518 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:29.518 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:29.518 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:29.518 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:29.518 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:29.518 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:29.518 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:29.518 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:29.777 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:29.777 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.777 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.777 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:29.777 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:29.777 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.777 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.777 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.777 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:29.777 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:29.777 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:29.777 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:29.777 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.777 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:29.777 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.777 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:29.777 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:29.777 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:30.040 request: 00:15:30.040 { 00:15:30.040 "name": "nvme0", 00:15:30.040 "trtype": "rdma", 00:15:30.040 "traddr": "192.168.100.8", 00:15:30.040 "adrfam": "ipv4", 00:15:30.040 "trsvcid": "4420", 00:15:30.040 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:30.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:30.040 "prchk_reftag": false, 00:15:30.040 "prchk_guard": false, 00:15:30.040 "hdgst": false, 00:15:30.040 "ddgst": false, 00:15:30.040 "dhchap_key": "key0", 00:15:30.040 "dhchap_ctrlr_key": "key1", 00:15:30.040 "allow_unrecognized_csi": false, 00:15:30.040 "method": "bdev_nvme_attach_controller", 00:15:30.040 "req_id": 1 00:15:30.040 } 00:15:30.040 Got JSON-RPC error response 00:15:30.040 response: 00:15:30.040 { 00:15:30.040 "code": -5, 00:15:30.040 "message": "Input/output error" 00:15:30.040 } 00:15:30.040 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:30.040 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:30.040 
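Two things happen above. First, the host entry was re-added with no DHCHAP key, so the controller no longer requests authentication; passing --dhchap-ctrlr-key key1 makes the host insist on authenticating the controller anyway, which the target cannot satisfy, hence the -5 — whereas the bdev_connect with --dhchap-key key0 alone that follows succeeds, an unused host key being harmless. Second, the NOT wrapper is what lets these expected failures pass: beyond the type -t argument validation visible in the trace, it simply inverts the exit status. A simplified sketch of the idea (not the full autotest_common.sh implementation):

# succeed only when the wrapped command fails
NOT() {
    if "$@"; then
        return 1
    fi
    return 0
}
NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1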
10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:30.040 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:30.040 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:30.040 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:30.040 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:30.333 nvme0n1 00:15:30.333 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:30.333 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.333 10:43:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:30.605 10:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.605 10:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.605 10:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.605 10:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:15:30.605 10:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.605 10:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.863 10:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.863 10:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:30.863 10:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:30.863 10:43:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:31.431 nvme0n1 00:15:31.431 10:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:31.431 10:43:59 
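This is the key-rotation step: nvmf_subsystem_set_keys replaces the key the target accepts for this host (here, key1) without dropping the host entry, after which the host reconnects using the new key. In plain RPC form, as issued in this run (default target socket for set_keys, host socket for the reconnect):

rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1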
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:31.431 10:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.690 10:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.690 10:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:31.690 10:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.690 10:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.690 10:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.690 10:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:31.690 10:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.690 10:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:31.949 10:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.949 10:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:15:31.949 10:43:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: --dhchap-ctrl-secret DHHC-1:03:Nzg2Y2M4ZmI1NDEyZDU5NzE2YzllZjYyMzVmNzA0YzQxZWQ4Y2IwMzkwMGNiNzEzMWY0NDVkNGQyZGYxYWYxZQqRKoI=: 00:15:32.514 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:15:32.514 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:32.514 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:32.514 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:32.514 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:32.514 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:32.514 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:32.514 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.514 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
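The nvme connect above exercises bidirectional authentication from the kernel host: --dhchap-secret carries the host's key (the DHHC-1:02: value) and --dhchap-ctrl-secret the controller's key (DHHC-1:03:), so each side proves possession of its secret. The nvme_get_ctrlr helper then walks /sys/devices/virtual/nvme-fabrics/ctl/nvme* comparing subsystem NQNs to find which ctl node the connect created. A trimmed sketch of the connect ($HOST_KEY and $CTRL_KEY stand in for the secrets printed in the log):

nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
    -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid 8013ee90-59d8-e711-906e-00163566263e \
    --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"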
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.773 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:32.773 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:32.773 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:32.773 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:32.773 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.773 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:32.773 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.773 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:32.773 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:32.773 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:33.031 request: 00:15:33.031 { 00:15:33.031 "name": "nvme0", 00:15:33.031 "trtype": "rdma", 00:15:33.031 "traddr": "192.168.100.8", 00:15:33.031 "adrfam": "ipv4", 00:15:33.031 "trsvcid": "4420", 00:15:33.031 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:33.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:15:33.032 "prchk_reftag": false, 00:15:33.032 "prchk_guard": false, 00:15:33.032 "hdgst": false, 00:15:33.032 "ddgst": false, 00:15:33.032 "dhchap_key": "key1", 00:15:33.032 "allow_unrecognized_csi": false, 00:15:33.032 "method": "bdev_nvme_attach_controller", 00:15:33.032 "req_id": 1 00:15:33.032 } 00:15:33.032 Got JSON-RPC error response 00:15:33.032 response: 00:15:33.032 { 00:15:33.032 "code": -5, 00:15:33.032 "message": "Input/output error" 00:15:33.032 } 00:15:33.032 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:33.032 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:33.032 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:33.032 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:33.032 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:33.032 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:33.032 10:44:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:33.967 nvme0n1 00:15:33.967 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:33.967 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:33.967 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.967 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.967 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.967 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.225 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:34.225 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.225 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.225 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.225 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:34.225 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:34.225 10:44:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:34.482 nvme0n1 00:15:34.482 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:34.482 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:34.482 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.740 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.740 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.740 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
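Conversely, calling nvmf_subsystem_set_keys with no key arguments clears the DHCHAP requirement: as the trace shows, the next bdev_connect attaches without any --dhchap-key and still succeeds. Condensed:

rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0    # no DHCHAP arguments needed any more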
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.000 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:35.000 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.000 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.000 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.000 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: '' 2s 00:15:35.000 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:35.000 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:35.000 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: 00:15:35.000 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:35.000 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:35.000 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:35.000 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: ]] 00:15:35.000 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NjU5YWE3OWMyNGYxMDg2YjNlYzM3OGYxODIwMjJiNmKZn+Vz: 00:15:35.000 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:35.000 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:35.000 10:44:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.903 10:44:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: 2s 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: ]] 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Y2NhZmY1Nzc4Yzc4Yzk5MTIxYWMwYTNmYTA4ZTUxNzkyNzY0NDk4NDM3MTNhNGUxvR/3DQ==: 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:36.903 10:44:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:39.436 10:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:39.436 10:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:15:39.436 10:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:15:39.436 10:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:15:39.436 10:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:15:39.436 10:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:15:39.436 10:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:15:39.436 10:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.436 10:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:39.436 10:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.436 10:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.436 10:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.436 10:44:06 
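The nvme_set_keys/waitforblk sequence above re-keys the live kernel controller in place: the helper writes a fresh DHHC-1 secret into the controller's node under /sys/devices/virtual/nvme-fabrics/ctl/nvme0, which triggers re-authentication, and waitforblk then confirms nvme0n1 is still visible to lsblk afterwards. A sketch of the idea, assuming a kernel that exposes the dhchap_secret/dhchap_ctrl_secret attributes (names per the upstream nvme-fabrics driver; $NEW_KEY and $NEW_CKEY are placeholders):

ctl=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
echo "$NEW_KEY"  > "$ctl/dhchap_secret"        # host key: writing it re-authenticates
echo "$NEW_CKEY" > "$ctl/dhchap_ctrl_secret"   # controller key, for the bidirectional case
lsblk -l -o NAME | grep -q -w nvme0n1          # the namespace should survive the re-key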
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:39.436 10:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:39.436 10:44:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:40.005 nvme0n1 00:15:40.005 10:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:40.005 10:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.005 10:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.005 10:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.005 10:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:40.005 10:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:40.571 10:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:40.571 10:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:40.571 10:44:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.572 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.572 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:40.572 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.572 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.572 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.572 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:40.572 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:40.831 10:44:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:40.831 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:40.831 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.090 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.090 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:41.090 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.090 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.090 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.090 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:41.090 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:41.090 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:41.090 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:15:41.090 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.090 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:15:41.090 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.090 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:41.090 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:41.349 request: 00:15:41.349 { 00:15:41.349 "name": "nvme0", 00:15:41.349 "dhchap_key": "key1", 00:15:41.349 "dhchap_ctrlr_key": "key3", 00:15:41.349 "method": "bdev_nvme_set_keys", 00:15:41.349 "req_id": 1 00:15:41.349 } 00:15:41.349 Got JSON-RPC error response 00:15:41.349 response: 00:15:41.349 { 00:15:41.349 "code": -13, 00:15:41.349 "message": "Permission denied" 00:15:41.349 } 00:15:41.349 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:41.349 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:41.349 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:41.349 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:41.349 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc 
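bdev_nvme_set_keys is the host-RPC counterpart of that sysfs write: it re-authenticates an already-attached controller with new keys. The failure above is intentional — the target was just pinned to key2/ckey3, so a re-key attempt with key1 is refused, and unlike the failed attaches (code -5) this surfaces as -13 ("Permission denied"). The failing call as issued here:

rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key key3    # -> code -13, Permission denied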
bdev_nvme_get_controllers 00:15:41.349 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:41.349 10:44:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.607 10:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:41.607 10:44:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:42.542 10:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:42.542 10:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:42.542 10:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.800 10:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:42.800 10:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:42.800 10:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.800 10:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.800 10:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.800 10:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:42.800 10:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:42.800 10:44:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:43.734 nvme0n1 00:15:43.734 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:43.734 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.734 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.734 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.735 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:43.735 
10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:43.735 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:43.735 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:15:43.735 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:43.735 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:15:43.735 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:43.735 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:43.735 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:43.992 request: 00:15:43.992 { 00:15:43.992 "name": "nvme0", 00:15:43.992 "dhchap_key": "key2", 00:15:43.992 "dhchap_ctrlr_key": "key0", 00:15:43.992 "method": "bdev_nvme_set_keys", 00:15:43.992 "req_id": 1 00:15:43.992 } 00:15:43.992 Got JSON-RPC error response 00:15:43.992 response: 00:15:43.992 { 00:15:43.992 "code": -13, 00:15:43.992 "message": "Permission denied" 00:15:43.992 } 00:15:43.992 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:43.992 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:43.992 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:43.992 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:43.992 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:43.992 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.992 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:44.250 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:44.250 10:44:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:45.185 10:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:45.185 10:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:45.185 10:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.444 10:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:45.444 10:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:45.444 10:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:45.444 10:44:12 
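Because this controller was attached with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1, a failed re-authentication tears it down within about a second; rather than sleeping a fixed interval, the test polls bdev_nvme_get_controllers until the list is empty (jq length goes from 1 to 0 above). The @272/@273 steps amount to this loop (a sketch):

while (( $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length) != 0 )); do
    sleep 1
done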
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3752364 00:15:45.444 10:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3752364 ']' 00:15:45.444 10:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3752364 00:15:45.444 10:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:15:45.444 10:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:45.444 10:44:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3752364 00:15:45.444 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:45.444 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:45.444 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3752364' 00:15:45.444 killing process with pid 3752364 00:15:45.444 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3752364 00:15:45.444 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3752364 00:15:45.703 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:45.703 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:45.703 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:45.703 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:45.703 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:45.703 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:45.703 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:45.703 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:45.703 rmmod nvme_rdma 00:15:45.703 rmmod nvme_fabrics 00:15:45.962 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:45.962 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:15:45.962 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:45.962 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3776352 ']' 00:15:45.962 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3776352 00:15:45.962 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3776352 ']' 00:15:45.962 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3776352 00:15:45.962 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:15:45.962 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:45.962 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3776352 00:15:45.962 10:44:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:45.962 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:45.962 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3776352' 00:15:45.962 killing process with pid 3776352 00:15:45.962 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3776352 00:15:45.962 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3776352 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.s5A /tmp/spdk.key-sha256.8dc /tmp/spdk.key-sha384.Yy8 /tmp/spdk.key-sha512.JXF /tmp/spdk.key-sha512.dCV /tmp/spdk.key-sha384.IdO /tmp/spdk.key-sha256.JvP '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:15:46.221 00:15:46.221 real 2m42.856s 00:15:46.221 user 6m12.517s 00:15:46.221 sys 0m24.832s 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.221 ************************************ 00:15:46.221 END TEST nvmf_auth_target 00:15:46.221 ************************************ 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:46.221 ************************************ 00:15:46.221 START TEST nvmf_srq_overwhelm 00:15:46.221 ************************************ 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:15:46.221 * Looking for test storage... 
00:15:46.221 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1691 -- # lcov --version 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.221 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:46.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.481 --rc genhtml_branch_coverage=1 00:15:46.481 --rc genhtml_function_coverage=1 00:15:46.481 --rc genhtml_legend=1 00:15:46.481 --rc geninfo_all_blocks=1 00:15:46.481 --rc geninfo_unexecuted_blocks=1 00:15:46.481 00:15:46.481 ' 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:46.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.481 --rc genhtml_branch_coverage=1 00:15:46.481 --rc genhtml_function_coverage=1 00:15:46.481 --rc genhtml_legend=1 00:15:46.481 --rc geninfo_all_blocks=1 00:15:46.481 --rc geninfo_unexecuted_blocks=1 00:15:46.481 00:15:46.481 ' 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:46.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.481 --rc genhtml_branch_coverage=1 00:15:46.481 --rc genhtml_function_coverage=1 00:15:46.481 --rc genhtml_legend=1 00:15:46.481 --rc geninfo_all_blocks=1 00:15:46.481 --rc geninfo_unexecuted_blocks=1 00:15:46.481 00:15:46.481 ' 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:46.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.481 --rc genhtml_branch_coverage=1 00:15:46.481 --rc genhtml_function_coverage=1 00:15:46.481 --rc genhtml_legend=1 00:15:46.481 --rc geninfo_all_blocks=1 00:15:46.481 --rc geninfo_unexecuted_blocks=1 00:15:46.481 00:15:46.481 ' 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:46.481 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:46.481 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.482 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:46.482 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:46.482 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:46.482 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.482 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:46.482 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.482 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:46.482 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:46.482 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:15:46.482 10:44:13 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:53.050 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:53.051 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:53.051 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:53.051 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:53.051 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
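The discovery pass above walks the cached Mellanox PCI IDs (0x15b3:0x1015 on this rig) and resolves each function to its net device through sysfs before settling on is_hw=yes. A minimal sketch of that mapping, using the bus addresses from this run (illustrative only, not the common.sh source):

  # Resolve mlx5 PCI functions to their net devices, as the
  # gather_supported_nvmf_pci_devs trace above does.
  for pci in 0000:d9:00.0 0000:d9:00.1; do
      vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")   # 0x15b3 (Mellanox)
      device=$(cat "/sys/bus/pci/devices/$pci/device")   # 0x1015 (ConnectX-4 Lx)
      for net in "/sys/bus/pci/devices/$pci/net/"*; do
          echo "Found net devices under $pci: ${net##*/}"
      done
  done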
00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:53.051 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:53.051 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:53.051 altname enp217s0f0np0 00:15:53.051 altname ens818f0np0 00:15:53.051 inet 192.168.100.8/24 scope global mlx_0_0 00:15:53.051 valid_lft forever preferred_lft forever 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:53.051 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:53.051 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:53.051 altname enp217s0f1np1 00:15:53.051 altname ens818f1np1 00:15:53.051 inet 192.168.100.9/24 scope global mlx_0_1 00:15:53.051 valid_lft forever preferred_lft forever 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:53.051 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:53.052 192.168.100.9' 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:53.052 192.168.100.9' 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:53.052 192.168.100.9' 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=3783155 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 3783155 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@833 -- # '[' -z 3783155 ']' 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
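nvmfappstart above boots the target with the core mask from the test config and then blocks in waitforlisten until the RPC socket answers. A simplified form of that launch-and-wait pattern (binary and socket paths from this log, run from the spdk root; waitforlisten additionally caps attempts with the max_retries=100 seen in the trace):

  # Launch nvmf_tgt in the background and poll the RPC socket until ready.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      sleep 0.5
  done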
00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:53.052 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:53.052 [2024-11-07 10:44:20.634438] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:15:53.052 [2024-11-07 10:44:20.634492] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.052 [2024-11-07 10:44:20.711556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:53.312 [2024-11-07 10:44:20.753407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.312 [2024-11-07 10:44:20.753443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.312 [2024-11-07 10:44:20.753453] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.312 [2024-11-07 10:44:20.753461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.312 [2024-11-07 10:44:20.753468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:53.312 [2024-11-07 10:44:20.755216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.312 [2024-11-07 10:44:20.755311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.312 [2024-11-07 10:44:20.755406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.312 [2024-11-07 10:44:20.755408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.312 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:53.312 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@866 -- # return 0 00:15:53.312 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:53.312 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:53.312 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:53.312 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.312 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:15:53.312 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.312 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:53.312 [2024-11-07 10:44:20.925610] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1051df0/0x10562e0) succeed. 00:15:53.312 [2024-11-07 10:44:20.934959] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1053480/0x1097980) succeed. 
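The two "Create IB device ... succeed" notices confirm the transport claimed both ConnectX ports. That nvmf_create_transport call is the knob this test later leans on: 1024 shared data buffers (--num-shared-buffers), an 8192-byte I/O unit (-u), and, on the RDMA transport, a shared receive queue capped at 1024 entries (-s). Outside the rpc_cmd helper the same call reads:

  # Equivalent plain RPC invocation (rpc_cmd wraps rpc.py with this socket).
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma \
      --num-shared-buffers 1024 -u 8192 -s 1024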
00:15:53.571 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.571 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:15:53.571 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:53.571 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:15:53.571 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.571 10:44:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:53.571 10:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.571 10:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:53.571 10:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.571 10:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:53.571 Malloc0 00:15:53.571 10:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.571 10:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:15:53.571 10:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.571 10:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:53.571 10:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.571 10:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:53.571 10:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.571 10:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:53.571 [2024-11-07 10:44:21.048263] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:53.571 10:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.571 10:44:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- 
# lsblk -l -o NAME 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:54.507 Malloc1 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.507 10:44:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:55.443 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:15:55.443 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:15:55.443 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:15:55.443 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme1n1 00:15:55.443 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:15:55.443 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1244 -- # grep -q -w nvme1n1 00:15:55.443 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:15:55.443 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:55.443 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:55.443 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.443 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:55.443 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.443 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:15:55.443 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.443 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:55.702 Malloc2 00:15:55.702 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.702 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:15:55.702 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.702 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:55.702 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.702 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:15:55.702 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.702 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:55.702 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.702 10:44:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme2n1 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme2n1 00:15:56.639 10:44:24 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:56.639 Malloc3 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.639 10:44:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:15:57.575 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:15:57.575 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:15:57.575 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:15:57.575 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme3n1 00:15:57.575 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme3n1 00:15:57.575 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:15:57.575 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:15:57.575 
10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:57.575 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:15:57.575 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.575 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:57.575 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.575 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:15:57.575 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.575 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:57.575 Malloc4 00:15:57.575 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.575 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:15:57.575 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.575 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:57.834 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.834 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:15:57.834 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.834 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:57.834 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.834 10:44:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:15:58.770 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:15:58.770 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:15:58.770 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:15:58.770 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme4n1 00:15:58.770 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme4n1 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:58.771 Malloc5 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.771 10:44:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:15:59.707 10:44:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:15:59.707 10:44:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1237 -- # local i=0 00:15:59.707 10:44:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:15:59.707 10:44:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme5n1 00:15:59.707 10:44:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:15:59.707 10:44:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1244 -- # grep -q -w nvme5n1 00:15:59.707 10:44:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1248 -- # return 0 00:15:59.707 10:44:27 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:15:59.707 
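At this point the $(seq 0 5) loop has exported six subsystems, each backed by a fresh 64 MiB malloc bdev with 512-byte blocks, and the host has connected to all of them. Condensed out of the xtrace noise, the target-side sequence per pass is roughly the following sketch (the scripts/rpc.py invocation path is an assumption; the NQNs, serial numbers, and the RDMA listener at 192.168.100.8:4420 are taken from the trace):

for i in $(seq 0 5); do
    # Mirrors srq_overwhelm.sh@23-26: subsystem, backing bdev, namespace, listener.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done

Each pass is paired on the host with the nvme connect call traced at srq_overwhelm.sh@27, using the same --hostnqn/--hostid UUID. The fio-wrapper invocation issued above then drives all six namespaces at once; the job file it generates is dumped next.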
[global]
00:15:59.707 thread=1
00:15:59.707 invalidate=1
00:15:59.707 rw=read
00:15:59.707 time_based=1
00:15:59.707 runtime=10
00:15:59.707 ioengine=libaio
00:15:59.707 direct=1
00:15:59.707 bs=1048576
00:15:59.707 iodepth=128
00:15:59.707 norandommap=1
00:15:59.707 numjobs=13
00:15:59.707
00:15:59.707 [job0]
00:15:59.707 filename=/dev/nvme0n1
00:15:59.707 [job1]
00:15:59.707 filename=/dev/nvme1n1
00:15:59.707 [job2]
00:15:59.707 filename=/dev/nvme2n1
00:15:59.707 [job3]
00:15:59.707 filename=/dev/nvme3n1
00:15:59.707 [job4]
00:15:59.707 filename=/dev/nvme4n1
00:15:59.966 [job5]
00:15:59.966 filename=/dev/nvme5n1
00:15:59.966 Could not set queue depth (nvme0n1)
00:15:59.966 Could not set queue depth (nvme1n1)
00:15:59.966 Could not set queue depth (nvme2n1)
00:15:59.966 Could not set queue depth (nvme3n1)
00:15:59.966 Could not set queue depth (nvme4n1)
00:15:59.966 Could not set queue depth (nvme5n1)
00:16:00.224 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:16:00.224 ...
00:16:00.224 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:16:00.224 ...
00:16:00.224 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:16:00.224 ...
00:16:00.224 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:16:00.224 ...
00:16:00.224 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:16:00.224 ...
00:16:00.224 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:16:00.224 ...
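Matching the wrapper flags against the dump above gives the mapping -t to rw, -i to bs, -d to iodepth, -r to runtime, and -n to numjobs (an inference from this log, not from fio-wrapper's source). A roughly equivalent standalone fio invocation for a single namespace would be:

# Hypothetical one-device equivalent of the generated job file; values
# copied from the [global] section and [job0] above.
fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=read --bs=1048576 --iodepth=128 --numjobs=13 \
    --time_based=1 --runtime=10 \
    --ioengine=libaio --direct=1 --invalidate=1 --norandommap=1 --thread=1

The "Could not set queue depth" lines are fio noting that it could not adjust the devices' queue depth, which these runs tolerate; with numjobs=13 over six job sections, fio starts the 6 x 13 = 78 threads reported next.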
00:16:00.224 fio-3.35 00:16:00.224 Starting 78 threads 00:16:15.153 00:16:15.153 job0: (groupid=0, jobs=1): err= 0: pid=3784624: Thu Nov 7 10:44:42 2024 00:16:15.153 read: IOPS=5, BW=6051KiB/s (6196kB/s)(71.0MiB/12016msec) 00:16:15.153 slat (usec): min=958, max=2076.7k, avg=168244.92, stdev=543289.12 00:16:15.153 clat (msec): min=69, max=12013, avg=7884.56, stdev=3887.30 00:16:15.153 lat (msec): min=2129, max=12015, avg=8052.80, stdev=3801.79 00:16:15.153 clat percentiles (msec): 00:16:15.153 | 1.00th=[ 70], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4279], 00:16:15.153 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 8658], 60.00th=[10671], 00:16:15.153 | 70.00th=[11879], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:16:15.153 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:16:15.153 | 99.99th=[12013] 00:16:15.153 lat (msec) : 100=1.41%, >=2000=98.59% 00:16:15.153 cpu : usr=0.00%, sys=0.61%, ctx=74, majf=0, minf=18177 00:16:15.153 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.3%, 16=22.5%, 32=45.1%, >=64=11.3% 00:16:15.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.153 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:15.153 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.153 job0: (groupid=0, jobs=1): err= 0: pid=3784625: Thu Nov 7 10:44:42 2024 00:16:15.153 read: IOPS=11, BW=11.6MiB/s (12.2MB/s)(139MiB/11969msec) 00:16:15.153 slat (usec): min=700, max=3035.7k, avg=85635.87, stdev=405837.08 00:16:15.153 clat (msec): min=64, max=11944, avg=7315.45, stdev=2604.72 00:16:15.153 lat (msec): min=2155, max=11946, avg=7401.09, stdev=2559.62 00:16:15.153 clat percentiles (msec): 00:16:15.153 | 1.00th=[ 2165], 5.00th=[ 5537], 10.00th=[ 5604], 20.00th=[ 5738], 00:16:15.153 | 30.00th=[ 5873], 40.00th=[ 6074], 50.00th=[ 6208], 60.00th=[ 6409], 00:16:15.153 | 70.00th=[ 6477], 80.00th=[11745], 90.00th=[11745], 95.00th=[11879], 00:16:15.153 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:16:15.153 | 99.99th=[11879] 00:16:15.153 bw ( KiB/s): min= 3319, max=12288, per=0.26%, avg=7250.33, stdev=4585.71, samples=3 00:16:15.153 iops : min= 3, max= 12, avg= 7.00, stdev= 4.58, samples=3 00:16:15.153 lat (msec) : 100=0.72%, >=2000=99.28% 00:16:15.153 cpu : usr=0.01%, sys=0.76%, ctx=267, majf=0, minf=32769 00:16:15.153 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=5.8%, 16=11.5%, 32=23.0%, >=64=54.7% 00:16:15.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.153 complete : 0=0.0%, 4=92.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=7.7% 00:16:15.153 issued rwts: total=139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.153 job0: (groupid=0, jobs=1): err= 0: pid=3784626: Thu Nov 7 10:44:42 2024 00:16:15.153 read: IOPS=22, BW=22.9MiB/s (24.0MB/s)(322MiB/14066msec) 00:16:15.153 slat (usec): min=50, max=2138.3k, avg=31053.55, stdev=229473.18 00:16:15.153 clat (msec): min=115, max=9280, avg=5140.89, stdev=2258.36 00:16:15.153 lat (msec): min=116, max=9281, avg=5171.94, stdev=2276.54 00:16:15.153 clat percentiles (msec): 00:16:15.153 | 1.00th=[ 116], 5.00th=[ 142], 10.00th=[ 4077], 20.00th=[ 4111], 00:16:15.153 | 30.00th=[ 4144], 40.00th=[ 4178], 50.00th=[ 4329], 60.00th=[ 4396], 00:16:15.153 | 70.00th=[ 5470], 80.00th=[ 7148], 90.00th=[ 8926], 95.00th=[ 9194], 00:16:15.153 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 
9329], 99.95th=[ 9329], 00:16:15.153 | 99.99th=[ 9329] 00:16:15.153 bw ( KiB/s): min=10240, max=284887, per=4.62%, avg=130461.00, stdev=140482.13, samples=3 00:16:15.153 iops : min= 10, max= 278, avg=127.33, stdev=137.07, samples=3 00:16:15.153 lat (msec) : 250=6.52%, >=2000=93.48% 00:16:15.153 cpu : usr=0.00%, sys=0.82%, ctx=382, majf=0, minf=32769 00:16:15.153 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=9.9%, >=64=80.4% 00:16:15.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.153 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:16:15.153 issued rwts: total=322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.153 job0: (groupid=0, jobs=1): err= 0: pid=3784627: Thu Nov 7 10:44:42 2024 00:16:15.153 read: IOPS=13, BW=13.9MiB/s (14.6MB/s)(195MiB/14028msec) 00:16:15.153 slat (usec): min=96, max=2138.3k, avg=61037.75, stdev=331031.91 00:16:15.153 clat (msec): min=680, max=13629, avg=8888.44, stdev=5409.63 00:16:15.153 lat (msec): min=683, max=13630, avg=8949.48, stdev=5395.51 00:16:15.153 clat percentiles (msec): 00:16:15.153 | 1.00th=[ 684], 5.00th=[ 718], 10.00th=[ 735], 20.00th=[ 802], 00:16:15.153 | 30.00th=[ 5134], 40.00th=[ 8557], 50.00th=[12953], 60.00th=[13221], 00:16:15.153 | 70.00th=[13355], 80.00th=[13355], 90.00th=[13489], 95.00th=[13624], 00:16:15.153 | 99.00th=[13624], 99.50th=[13624], 99.90th=[13624], 99.95th=[13624], 00:16:15.154 | 99.99th=[13624] 00:16:15.154 bw ( KiB/s): min= 2052, max=63488, per=0.61%, avg=17195.12, stdev=21261.10, samples=8 00:16:15.154 iops : min= 2, max= 62, avg=16.75, stdev=20.78, samples=8 00:16:15.154 lat (msec) : 750=11.79%, 1000=12.82%, >=2000=75.38% 00:16:15.154 cpu : usr=0.00%, sys=0.91%, ctx=184, majf=0, minf=32769 00:16:15.154 IO depths : 1=0.5%, 2=1.0%, 4=2.1%, 8=4.1%, 16=8.2%, 32=16.4%, >=64=67.7% 00:16:15.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.154 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:16:15.154 issued rwts: total=195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.154 job0: (groupid=0, jobs=1): err= 0: pid=3784628: Thu Nov 7 10:44:42 2024 00:16:15.154 read: IOPS=5, BW=5622KiB/s (5757kB/s)(66.0MiB/12022msec) 00:16:15.154 slat (usec): min=399, max=3236.1k, avg=151652.83, stdev=584830.30 00:16:15.154 clat (msec): min=2012, max=12019, avg=8635.30, stdev=4432.33 00:16:15.154 lat (msec): min=2036, max=12021, avg=8786.95, stdev=4373.08 00:16:15.154 clat percentiles (msec): 00:16:15.154 | 1.00th=[ 2005], 5.00th=[ 2039], 10.00th=[ 2056], 20.00th=[ 2140], 00:16:15.154 | 30.00th=[ 4279], 40.00th=[11879], 50.00th=[11879], 60.00th=[12013], 00:16:15.154 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:16:15.154 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:16:15.154 | 99.99th=[12013] 00:16:15.154 lat (msec) : >=2000=100.00% 00:16:15.154 cpu : usr=0.00%, sys=0.47%, ctx=100, majf=0, minf=16897 00:16:15.154 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.1%, 16=24.2%, 32=48.5%, >=64=4.5% 00:16:15.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.154 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:15.154 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.154 job0: 
(groupid=0, jobs=1): err= 0: pid=3784629: Thu Nov 7 10:44:42 2024 00:16:15.154 read: IOPS=1, BW=1501KiB/s (1537kB/s)(19.0MiB/12959msec) 00:16:15.154 slat (usec): min=1191, max=2136.0k, avg=568574.68, stdev=945726.58 00:16:15.154 clat (msec): min=2155, max=12928, avg=9924.92, stdev=3746.64 00:16:15.154 lat (msec): min=4239, max=12958, avg=10493.50, stdev=3294.54 00:16:15.154 clat percentiles (msec): 00:16:15.154 | 1.00th=[ 2165], 5.00th=[ 2165], 10.00th=[ 4245], 20.00th=[ 6342], 00:16:15.154 | 30.00th=[ 6409], 40.00th=[10671], 50.00th=[12818], 60.00th=[12818], 00:16:15.154 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:16:15.154 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:16:15.154 | 99.99th=[12953] 00:16:15.154 lat (msec) : >=2000=100.00% 00:16:15.154 cpu : usr=0.01%, sys=0.15%, ctx=44, majf=0, minf=4865 00:16:15.154 IO depths : 1=5.3%, 2=10.5%, 4=21.1%, 8=42.1%, 16=21.1%, 32=0.0%, >=64=0.0% 00:16:15.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.154 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:15.154 issued rwts: total=19,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.154 job0: (groupid=0, jobs=1): err= 0: pid=3784630: Thu Nov 7 10:44:42 2024 00:16:15.154 read: IOPS=2, BW=2357KiB/s (2414kB/s)(30.0MiB/13032msec) 00:16:15.154 slat (usec): min=855, max=2165.2k, avg=362789.85, stdev=808065.94 00:16:15.154 clat (msec): min=2147, max=13029, avg=11654.29, stdev=2787.66 00:16:15.154 lat (msec): min=4239, max=13030, avg=12017.08, stdev=2140.92 00:16:15.154 clat percentiles (msec): 00:16:15.154 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[10671], 00:16:15.154 | 30.00th=[12818], 40.00th=[12953], 50.00th=[12953], 60.00th=[12953], 00:16:15.154 | 70.00th=[12953], 80.00th=[13087], 90.00th=[13087], 95.00th=[13087], 00:16:15.154 | 99.00th=[13087], 99.50th=[13087], 99.90th=[13087], 99.95th=[13087], 00:16:15.154 | 99.99th=[13087] 00:16:15.154 lat (msec) : >=2000=100.00% 00:16:15.154 cpu : usr=0.00%, sys=0.25%, ctx=52, majf=0, minf=7681 00:16:15.154 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0% 00:16:15.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.154 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:15.154 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.154 job0: (groupid=0, jobs=1): err= 0: pid=3784631: Thu Nov 7 10:44:42 2024 00:16:15.154 read: IOPS=80, BW=80.7MiB/s (84.6MB/s)(965MiB/11965msec) 00:16:15.154 slat (usec): min=42, max=2071.3k, avg=12334.39, stdev=96693.13 00:16:15.154 clat (msec): min=56, max=5285, avg=1418.26, stdev=1335.45 00:16:15.154 lat (msec): min=424, max=5287, avg=1430.59, stdev=1338.34 00:16:15.154 clat percentiles (msec): 00:16:15.154 | 1.00th=[ 443], 5.00th=[ 518], 10.00th=[ 542], 20.00th=[ 676], 00:16:15.154 | 30.00th=[ 818], 40.00th=[ 894], 50.00th=[ 961], 60.00th=[ 1083], 00:16:15.154 | 70.00th=[ 1133], 80.00th=[ 1183], 90.00th=[ 4329], 95.00th=[ 5134], 00:16:15.154 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:16:15.154 | 99.99th=[ 5269] 00:16:15.154 bw ( KiB/s): min= 6681, max=247808, per=4.33%, avg=122333.21, stdev=76795.65, samples=14 00:16:15.154 iops : min= 6, max= 242, avg=119.43, stdev=75.06, samples=14 00:16:15.154 lat (msec) : 100=0.10%, 
500=2.80%, 750=23.11%, 1000=27.36%, 2000=33.06% 00:16:15.154 lat (msec) : >=2000=13.58% 00:16:15.154 cpu : usr=0.04%, sys=1.73%, ctx=1029, majf=0, minf=32769 00:16:15.154 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.5% 00:16:15.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.154 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.154 issued rwts: total=965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.154 job0: (groupid=0, jobs=1): err= 0: pid=3784633: Thu Nov 7 10:44:42 2024 00:16:15.154 read: IOPS=55, BW=55.2MiB/s (57.9MB/s)(655MiB/11868msec) 00:16:15.154 slat (usec): min=46, max=2093.5k, avg=18023.67, stdev=150149.76 00:16:15.154 clat (msec): min=59, max=7187, avg=2214.45, stdev=1975.42 00:16:15.154 lat (msec): min=425, max=7198, avg=2232.48, stdev=1987.12 00:16:15.154 clat percentiles (msec): 00:16:15.154 | 1.00th=[ 430], 5.00th=[ 464], 10.00th=[ 510], 20.00th=[ 659], 00:16:15.154 | 30.00th=[ 726], 40.00th=[ 735], 50.00th=[ 802], 60.00th=[ 2467], 00:16:15.154 | 70.00th=[ 2702], 80.00th=[ 3071], 90.00th=[ 4866], 95.00th=[ 6812], 00:16:15.154 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 7215], 99.95th=[ 7215], 00:16:15.154 | 99.99th=[ 7215] 00:16:15.154 bw ( KiB/s): min= 6144, max=198656, per=3.82%, avg=107767.90, stdev=65996.54, samples=10 00:16:15.154 iops : min= 6, max= 194, avg=105.20, stdev=64.49, samples=10 00:16:15.154 lat (msec) : 100=0.15%, 500=8.85%, 750=35.42%, 1000=6.41%, >=2000=49.16% 00:16:15.154 cpu : usr=0.05%, sys=1.37%, ctx=670, majf=0, minf=32769 00:16:15.154 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.9%, >=64=90.4% 00:16:15.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.154 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:15.154 issued rwts: total=655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.154 job0: (groupid=0, jobs=1): err= 0: pid=3784634: Thu Nov 7 10:44:42 2024 00:16:15.154 read: IOPS=1, BW=1882KiB/s (1927kB/s)(24.0MiB/13059msec) 00:16:15.154 slat (usec): min=1211, max=3247.3k, avg=454294.54, stdev=1038849.53 00:16:15.154 clat (msec): min=2155, max=13056, avg=11498.76, stdev=3215.93 00:16:15.154 lat (msec): min=4273, max=13058, avg=11953.05, stdev=2537.20 00:16:15.154 clat percentiles (msec): 00:16:15.154 | 1.00th=[ 2165], 5.00th=[ 4279], 10.00th=[ 6409], 20.00th=[ 9731], 00:16:15.154 | 30.00th=[12953], 40.00th=[12953], 50.00th=[12953], 60.00th=[12953], 00:16:15.154 | 70.00th=[13087], 80.00th=[13087], 90.00th=[13087], 95.00th=[13087], 00:16:15.154 | 99.00th=[13087], 99.50th=[13087], 99.90th=[13087], 99.95th=[13087], 00:16:15.154 | 99.99th=[13087] 00:16:15.154 lat (msec) : >=2000=100.00% 00:16:15.154 cpu : usr=0.00%, sys=0.20%, ctx=55, majf=0, minf=6145 00:16:15.154 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0% 00:16:15.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.154 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:15.154 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.154 job0: (groupid=0, jobs=1): err= 0: pid=3784635: Thu Nov 7 10:44:42 2024 00:16:15.154 read: IOPS=177, BW=178MiB/s (186MB/s)(2503MiB/14074msec) 00:16:15.154 slat (usec): min=45, 
max=2081.9k, avg=4771.02, stdev=71943.51 00:16:15.154 clat (msec): min=117, max=8717, avg=693.06, stdev=1792.69 00:16:15.154 lat (msec): min=117, max=8718, avg=697.83, stdev=1799.25 00:16:15.154 clat percentiles (msec): 00:16:15.154 | 1.00th=[ 118], 5.00th=[ 120], 10.00th=[ 120], 20.00th=[ 120], 00:16:15.154 | 30.00th=[ 121], 40.00th=[ 142], 50.00th=[ 255], 60.00th=[ 338], 00:16:15.154 | 70.00th=[ 388], 80.00th=[ 409], 90.00th=[ 642], 95.00th=[ 4463], 00:16:15.154 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:16:15.154 | 99.99th=[ 8658] 00:16:15.154 bw ( KiB/s): min= 2052, max=1089536, per=12.32%, avg=347551.14, stdev=346420.70, samples=14 00:16:15.154 iops : min= 2, max= 1064, avg=339.36, stdev=338.35, samples=14 00:16:15.154 lat (msec) : 250=44.43%, 500=42.23%, 750=7.63%, >=2000=5.71% 00:16:15.154 cpu : usr=0.07%, sys=2.15%, ctx=2294, majf=0, minf=32769 00:16:15.154 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:16:15.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.154 issued rwts: total=2503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.154 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.154 job0: (groupid=0, jobs=1): err= 0: pid=3784636: Thu Nov 7 10:44:42 2024 00:16:15.154 read: IOPS=3, BW=3583KiB/s (3668kB/s)(42.0MiB/12005msec) 00:16:15.154 slat (usec): min=696, max=3215.6k, avg=238181.53, stdev=717258.43 00:16:15.154 clat (msec): min=2001, max=12000, avg=7556.86, stdev=4536.74 00:16:15.154 lat (msec): min=2019, max=12004, avg=7795.04, stdev=4500.41 00:16:15.154 clat percentiles (msec): 00:16:15.154 | 1.00th=[ 2005], 5.00th=[ 2022], 10.00th=[ 2056], 20.00th=[ 2123], 00:16:15.154 | 30.00th=[ 2165], 40.00th=[ 4279], 50.00th=[ 8557], 60.00th=[12013], 00:16:15.154 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:16:15.154 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:16:15.154 | 99.99th=[12013] 00:16:15.155 lat (msec) : >=2000=100.00% 00:16:15.155 cpu : usr=0.01%, sys=0.28%, ctx=80, majf=0, minf=10753 00:16:15.155 IO depths : 1=2.4%, 2=4.8%, 4=9.5%, 8=19.0%, 16=38.1%, 32=26.2%, >=64=0.0% 00:16:15.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.155 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:15.155 issued rwts: total=42,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.155 job0: (groupid=0, jobs=1): err= 0: pid=3784637: Thu Nov 7 10:44:42 2024 00:16:15.155 read: IOPS=97, BW=98.0MiB/s (103MB/s)(1264MiB/12901msec) 00:16:15.155 slat (usec): min=51, max=2092.6k, avg=8493.74, stdev=101535.09 00:16:15.155 clat (msec): min=240, max=8807, avg=1252.31, stdev=2431.61 00:16:15.155 lat (msec): min=241, max=8808, avg=1260.80, stdev=2439.73 00:16:15.155 clat percentiles (msec): 00:16:15.155 | 1.00th=[ 251], 5.00th=[ 259], 10.00th=[ 296], 20.00th=[ 380], 00:16:15.155 | 30.00th=[ 388], 40.00th=[ 393], 50.00th=[ 405], 60.00th=[ 414], 00:16:15.155 | 70.00th=[ 435], 80.00th=[ 567], 90.00th=[ 4530], 95.00th=[ 8658], 00:16:15.155 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:16:15.155 | 99.99th=[ 8792] 00:16:15.155 bw ( KiB/s): min= 2048, max=385024, per=6.88%, avg=194048.67, stdev=161816.18, samples=12 00:16:15.155 iops : min= 2, max= 376, avg=189.50, stdev=158.02, samples=12 00:16:15.155 lat 
(msec) : 250=0.87%, 500=76.58%, 750=6.17%, 1000=5.46%, >=2000=10.92% 00:16:15.155 cpu : usr=0.00%, sys=1.45%, ctx=1149, majf=0, minf=32769 00:16:15.155 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0% 00:16:15.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.155 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.155 issued rwts: total=1264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.155 job1: (groupid=0, jobs=1): err= 0: pid=3784666: Thu Nov 7 10:44:42 2024 00:16:15.155 read: IOPS=3, BW=3497KiB/s (3580kB/s)(44.0MiB/12886msec) 00:16:15.155 slat (usec): min=976, max=2120.4k, avg=243848.02, stdev=665796.34 00:16:15.155 clat (msec): min=2156, max=12884, avg=9755.88, stdev=3481.29 00:16:15.155 lat (msec): min=4211, max=12885, avg=9999.73, stdev=3308.04 00:16:15.155 clat percentiles (msec): 00:16:15.155 | 1.00th=[ 2165], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:16:15.155 | 30.00th=[ 6409], 40.00th=[10671], 50.00th=[10671], 60.00th=[12818], 00:16:15.155 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:16:15.155 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:16:15.155 | 99.99th=[12818] 00:16:15.155 lat (msec) : >=2000=100.00% 00:16:15.155 cpu : usr=0.00%, sys=0.34%, ctx=37, majf=0, minf=11265 00:16:15.155 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:16:15.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.155 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:15.155 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.155 job1: (groupid=0, jobs=1): err= 0: pid=3784667: Thu Nov 7 10:44:42 2024 00:16:15.155 read: IOPS=17, BW=17.9MiB/s (18.8MB/s)(213MiB/11902msec) 00:16:15.155 slat (usec): min=53, max=2075.8k, avg=55535.37, stdev=301746.35 00:16:15.155 clat (msec): min=71, max=10604, avg=5991.99, stdev=1387.77 00:16:15.155 lat (msec): min=2131, max=10717, avg=6047.52, stdev=1361.75 00:16:15.155 clat percentiles (msec): 00:16:15.155 | 1.00th=[ 2165], 5.00th=[ 4329], 10.00th=[ 4732], 20.00th=[ 4933], 00:16:15.155 | 30.00th=[ 5201], 40.00th=[ 5470], 50.00th=[ 6275], 60.00th=[ 6342], 00:16:15.155 | 70.00th=[ 6409], 80.00th=[ 6409], 90.00th=[ 8490], 95.00th=[ 8557], 00:16:15.155 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[10671], 99.95th=[10671], 00:16:15.155 | 99.99th=[10671] 00:16:15.155 bw ( KiB/s): min= 4096, max=147456, per=1.53%, avg=43283.00, stdev=69531.08, samples=4 00:16:15.155 iops : min= 4, max= 144, avg=42.25, stdev=67.91, samples=4 00:16:15.155 lat (msec) : 100=0.47%, >=2000=99.53% 00:16:15.155 cpu : usr=0.01%, sys=0.73%, ctx=426, majf=0, minf=32769 00:16:15.155 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.8%, 16=7.5%, 32=15.0%, >=64=70.4% 00:16:15.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.155 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:16:15.155 issued rwts: total=213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.155 job1: (groupid=0, jobs=1): err= 0: pid=3784668: Thu Nov 7 10:44:42 2024 00:16:15.155 read: IOPS=12, BW=12.0MiB/s (12.6MB/s)(157MiB/13068msec) 00:16:15.155 slat (usec): min=320, max=3126.7k, avg=69487.76, stdev=372370.94 00:16:15.155 
clat (msec): min=1317, max=12975, avg=10082.95, stdev=3305.43 00:16:15.155 lat (msec): min=1319, max=12981, avg=10152.44, stdev=3250.81 00:16:15.155 clat percentiles (msec): 00:16:15.155 | 1.00th=[ 1318], 5.00th=[ 3440], 10.00th=[ 5403], 20.00th=[ 7483], 00:16:15.155 | 30.00th=[ 7617], 40.00th=[11879], 50.00th=[12013], 60.00th=[12147], 00:16:15.155 | 70.00th=[12147], 80.00th=[12416], 90.00th=[12684], 95.00th=[12818], 00:16:15.155 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:16:15.155 | 99.99th=[12953] 00:16:15.155 bw ( KiB/s): min= 2048, max=26624, per=0.44%, avg=12288.00, stdev=9385.12, samples=5 00:16:15.155 iops : min= 2, max= 26, avg=12.00, stdev= 9.17, samples=5 00:16:15.155 lat (msec) : 2000=3.82%, >=2000=96.18% 00:16:15.155 cpu : usr=0.01%, sys=0.81%, ctx=377, majf=0, minf=32769 00:16:15.155 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=5.1%, 16=10.2%, 32=20.4%, >=64=59.9% 00:16:15.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.155 complete : 0=0.0%, 4=96.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.2% 00:16:15.155 issued rwts: total=157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.155 job1: (groupid=0, jobs=1): err= 0: pid=3784669: Thu Nov 7 10:44:42 2024 00:16:15.155 read: IOPS=41, BW=41.2MiB/s (43.2MB/s)(579MiB/14049msec) 00:16:15.155 slat (usec): min=49, max=2111.3k, avg=17341.75, stdev=139426.51 00:16:15.155 clat (msec): min=539, max=10671, avg=2377.53, stdev=2607.03 00:16:15.155 lat (msec): min=541, max=10693, avg=2394.87, stdev=2618.15 00:16:15.155 clat percentiles (msec): 00:16:15.155 | 1.00th=[ 542], 5.00th=[ 575], 10.00th=[ 634], 20.00th=[ 726], 00:16:15.155 | 30.00th=[ 768], 40.00th=[ 802], 50.00th=[ 827], 60.00th=[ 869], 00:16:15.155 | 70.00th=[ 3775], 80.00th=[ 4111], 90.00th=[ 7617], 95.00th=[ 7752], 00:16:15.155 | 99.00th=[ 7819], 99.50th=[ 7819], 99.90th=[10671], 99.95th=[10671], 00:16:15.155 | 99.99th=[10671] 00:16:15.155 bw ( KiB/s): min= 1622, max=229376, per=3.64%, avg=102685.89, stdev=81231.28, samples=9 00:16:15.155 iops : min= 1, max= 224, avg=100.11, stdev=79.47, samples=9 00:16:15.155 lat (msec) : 750=24.18%, 1000=43.70%, 2000=0.52%, >=2000=31.61% 00:16:15.155 cpu : usr=0.00%, sys=1.05%, ctx=501, majf=0, minf=32769 00:16:15.155 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.1% 00:16:15.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.155 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:15.155 issued rwts: total=579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.155 job1: (groupid=0, jobs=1): err= 0: pid=3784670: Thu Nov 7 10:44:42 2024 00:16:15.155 read: IOPS=81, BW=81.1MiB/s (85.0MB/s)(971MiB/11980msec) 00:16:15.155 slat (usec): min=41, max=2118.5k, avg=10319.92, stdev=116543.38 00:16:15.155 clat (msec): min=233, max=10767, avg=1502.30, stdev=2080.30 00:16:15.155 lat (msec): min=235, max=10767, avg=1512.62, stdev=2094.95 00:16:15.155 clat percentiles (msec): 00:16:15.155 | 1.00th=[ 236], 5.00th=[ 236], 10.00th=[ 239], 20.00th=[ 243], 00:16:15.155 | 30.00th=[ 243], 40.00th=[ 330], 50.00th=[ 443], 60.00th=[ 493], 00:16:15.155 | 70.00th=[ 986], 80.00th=[ 2802], 90.00th=[ 4530], 95.00th=[ 6544], 00:16:15.155 | 99.00th=[ 6611], 99.50th=[10671], 99.90th=[10805], 99.95th=[10805], 00:16:15.155 | 99.99th=[10805] 00:16:15.155 bw ( KiB/s): min= 6144, max=503808, per=6.11%, 
avg=172299.00, stdev=188321.38, samples=10 00:16:15.155 iops : min= 6, max= 492, avg=168.20, stdev=183.95, samples=10 00:16:15.155 lat (msec) : 250=37.69%, 500=24.72%, 750=3.40%, 1000=4.84%, 2000=0.31% 00:16:15.155 lat (msec) : >=2000=29.04% 00:16:15.155 cpu : usr=0.04%, sys=1.59%, ctx=968, majf=0, minf=32769 00:16:15.155 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.5% 00:16:15.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.155 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.155 issued rwts: total=971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.155 job1: (groupid=0, jobs=1): err= 0: pid=3784671: Thu Nov 7 10:44:42 2024 00:16:15.155 read: IOPS=67, BW=67.5MiB/s (70.8MB/s)(807MiB/11960msec) 00:16:15.155 slat (usec): min=39, max=2102.7k, avg=14731.53, stdev=126892.22 00:16:15.155 clat (msec): min=67, max=7105, avg=1748.25, stdev=1999.81 00:16:15.155 lat (msec): min=513, max=7105, avg=1762.98, stdev=2005.65 00:16:15.155 clat percentiles (msec): 00:16:15.155 | 1.00th=[ 514], 5.00th=[ 531], 10.00th=[ 558], 20.00th=[ 676], 00:16:15.155 | 30.00th=[ 785], 40.00th=[ 827], 50.00th=[ 860], 60.00th=[ 902], 00:16:15.155 | 70.00th=[ 1083], 80.00th=[ 1586], 90.00th=[ 6544], 95.00th=[ 6812], 00:16:15.155 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7080], 99.95th=[ 7080], 00:16:15.155 | 99.99th=[ 7080] 00:16:15.155 bw ( KiB/s): min= 6144, max=223232, per=4.47%, avg=126284.09, stdev=73754.59, samples=11 00:16:15.155 iops : min= 6, max= 218, avg=123.27, stdev=72.12, samples=11 00:16:15.155 lat (msec) : 100=0.12%, 750=25.40%, 1000=42.87%, 2000=11.77%, >=2000=19.83% 00:16:15.155 cpu : usr=0.04%, sys=1.23%, ctx=949, majf=0, minf=32769 00:16:15.155 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:16:15.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.155 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.155 issued rwts: total=807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.155 job1: (groupid=0, jobs=1): err= 0: pid=3784672: Thu Nov 7 10:44:42 2024 00:16:15.155 read: IOPS=4, BW=4311KiB/s (4415kB/s)(59.0MiB/14013msec) 00:16:15.155 slat (usec): min=595, max=2077.0k, avg=201398.71, stdev=587293.66 00:16:15.155 clat (msec): min=2129, max=14009, avg=9749.91, stdev=3643.52 00:16:15.155 lat (msec): min=4184, max=14012, avg=9951.31, stdev=3542.03 00:16:15.155 clat percentiles (msec): 00:16:15.156 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:16:15.156 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[10671], 00:16:15.156 | 70.00th=[12818], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:16:15.156 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:16:15.156 | 99.99th=[14026] 00:16:15.156 lat (msec) : >=2000=100.00% 00:16:15.156 cpu : usr=0.00%, sys=0.39%, ctx=59, majf=0, minf=15105 00:16:15.156 IO depths : 1=1.7%, 2=3.4%, 4=6.8%, 8=13.6%, 16=27.1%, 32=47.5%, >=64=0.0% 00:16:15.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.156 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:15.156 issued rwts: total=59,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.156 job1: (groupid=0, jobs=1): err= 0: 
pid=3784673: Thu Nov 7 10:44:42 2024 00:16:15.156 read: IOPS=5, BW=5591KiB/s (5725kB/s)(77.0MiB/14103msec) 00:16:15.156 slat (usec): min=943, max=2080.1k, avg=155488.83, stdev=521465.18 00:16:15.156 clat (msec): min=2129, max=14099, avg=10871.22, stdev=3665.09 00:16:15.156 lat (msec): min=4198, max=14102, avg=11026.71, stdev=3541.23 00:16:15.156 clat percentiles (msec): 00:16:15.156 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6409], 00:16:15.156 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12818], 60.00th=[13892], 00:16:15.156 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:16:15.156 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:16:15.156 | 99.99th=[14160] 00:16:15.156 lat (msec) : >=2000=100.00% 00:16:15.156 cpu : usr=0.00%, sys=0.58%, ctx=77, majf=0, minf=19713 00:16:15.156 IO depths : 1=1.3%, 2=2.6%, 4=5.2%, 8=10.4%, 16=20.8%, 32=41.6%, >=64=18.2% 00:16:15.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.156 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:15.156 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.156 job1: (groupid=0, jobs=1): err= 0: pid=3784675: Thu Nov 7 10:44:42 2024 00:16:15.156 read: IOPS=38, BW=38.8MiB/s (40.6MB/s)(502MiB/12953msec) 00:16:15.156 slat (usec): min=46, max=2162.8k, avg=21498.00, stdev=179880.71 00:16:15.156 clat (msec): min=367, max=9663, avg=3170.87, stdev=3273.04 00:16:15.156 lat (msec): min=375, max=11417, avg=3192.37, stdev=3289.43 00:16:15.156 clat percentiles (msec): 00:16:15.156 | 1.00th=[ 388], 5.00th=[ 414], 10.00th=[ 447], 20.00th=[ 498], 00:16:15.156 | 30.00th=[ 514], 40.00th=[ 969], 50.00th=[ 2265], 60.00th=[ 2500], 00:16:15.156 | 70.00th=[ 2735], 80.00th=[ 8658], 90.00th=[ 8792], 95.00th=[ 8926], 00:16:15.156 | 99.00th=[ 9060], 99.50th=[ 9597], 99.90th=[ 9731], 99.95th=[ 9731], 00:16:15.156 | 99.99th=[ 9731] 00:16:15.156 bw ( KiB/s): min= 2048, max=270336, per=3.02%, avg=85333.33, stdev=97231.48, samples=9 00:16:15.156 iops : min= 2, max= 264, avg=83.33, stdev=94.95, samples=9 00:16:15.156 lat (msec) : 500=22.11%, 750=8.76%, 1000=10.16%, 2000=5.78%, >=2000=53.19% 00:16:15.156 cpu : usr=0.00%, sys=1.19%, ctx=526, majf=0, minf=32769 00:16:15.156 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.5% 00:16:15.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.156 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:16:15.156 issued rwts: total=502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.156 job1: (groupid=0, jobs=1): err= 0: pid=3784676: Thu Nov 7 10:44:42 2024 00:16:15.156 read: IOPS=22, BW=22.9MiB/s (24.1MB/s)(300MiB/13076msec) 00:16:15.156 slat (usec): min=163, max=2184.3k, avg=36405.44, stdev=241462.02 00:16:15.156 clat (msec): min=880, max=11852, avg=5351.80, stdev=4835.58 00:16:15.156 lat (msec): min=884, max=11861, avg=5388.21, stdev=4843.34 00:16:15.156 clat percentiles (msec): 00:16:15.156 | 1.00th=[ 885], 5.00th=[ 911], 10.00th=[ 936], 20.00th=[ 986], 00:16:15.156 | 30.00th=[ 1036], 40.00th=[ 1150], 50.00th=[ 1250], 60.00th=[ 7617], 00:16:15.156 | 70.00th=[10939], 80.00th=[11208], 90.00th=[11476], 95.00th=[11745], 00:16:15.156 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:16:15.156 | 99.99th=[11879] 00:16:15.156 bw ( KiB/s): min= 
2048, max=114688, per=1.57%, avg=44271.50, stdev=46032.54, samples=8 00:16:15.156 iops : min= 2, max= 112, avg=43.13, stdev=44.82, samples=8 00:16:15.156 lat (msec) : 1000=21.67%, 2000=31.33%, >=2000=47.00% 00:16:15.156 cpu : usr=0.02%, sys=1.02%, ctx=469, majf=0, minf=32769 00:16:15.156 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.7%, 16=5.3%, 32=10.7%, >=64=79.0% 00:16:15.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.156 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:16:15.156 issued rwts: total=300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.156 job1: (groupid=0, jobs=1): err= 0: pid=3784677: Thu Nov 7 10:44:42 2024 00:16:15.156 read: IOPS=14, BW=15.0MiB/s (15.7MB/s)(195MiB/13010msec) 00:16:15.156 slat (usec): min=483, max=2190.6k, avg=55654.33, stdev=302235.49 00:16:15.156 clat (msec): min=1371, max=12222, avg=7999.20, stdev=4713.67 00:16:15.156 lat (msec): min=1393, max=12236, avg=8054.85, stdev=4699.11 00:16:15.156 clat percentiles (msec): 00:16:15.156 | 1.00th=[ 1385], 5.00th=[ 1418], 10.00th=[ 1469], 20.00th=[ 1519], 00:16:15.156 | 30.00th=[ 1536], 40.00th=[10805], 50.00th=[10939], 60.00th=[11208], 00:16:15.156 | 70.00th=[11610], 80.00th=[11879], 90.00th=[12013], 95.00th=[12147], 00:16:15.156 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:15.156 | 99.99th=[12281] 00:16:15.156 bw ( KiB/s): min= 2048, max=61440, per=0.70%, avg=19894.86, stdev=27773.23, samples=7 00:16:15.156 iops : min= 2, max= 60, avg=19.43, stdev=27.12, samples=7 00:16:15.156 lat (msec) : 2000=32.31%, >=2000=67.69% 00:16:15.156 cpu : usr=0.02%, sys=0.86%, ctx=439, majf=0, minf=32769 00:16:15.156 IO depths : 1=0.5%, 2=1.0%, 4=2.1%, 8=4.1%, 16=8.2%, 32=16.4%, >=64=67.7% 00:16:15.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.156 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:16:15.156 issued rwts: total=195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.156 job1: (groupid=0, jobs=1): err= 0: pid=3784678: Thu Nov 7 10:44:42 2024 00:16:15.156 read: IOPS=4, BW=4604KiB/s (4714kB/s)(54.0MiB/12011msec) 00:16:15.156 slat (usec): min=970, max=2119.1k, avg=221027.26, stdev=617101.11 00:16:15.156 clat (msec): min=74, max=12008, avg=7343.41, stdev=3951.28 00:16:15.156 lat (msec): min=2122, max=12010, avg=7564.44, stdev=3869.99 00:16:15.156 clat percentiles (msec): 00:16:15.156 | 1.00th=[ 74], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 2198], 00:16:15.156 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8658], 00:16:15.156 | 70.00th=[11879], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:16:15.156 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:16:15.156 | 99.99th=[12013] 00:16:15.156 lat (msec) : 100=1.85%, >=2000=98.15% 00:16:15.156 cpu : usr=0.01%, sys=0.47%, ctx=68, majf=0, minf=13825 00:16:15.156 IO depths : 1=1.9%, 2=3.7%, 4=7.4%, 8=14.8%, 16=29.6%, 32=42.6%, >=64=0.0% 00:16:15.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.156 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:15.156 issued rwts: total=54,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.156 job1: (groupid=0, jobs=1): err= 0: pid=3784679: Thu Nov 7 10:44:42 2024 00:16:15.156 
read: IOPS=99, BW=99.4MiB/s (104MB/s)(1286MiB/12939msec) 00:16:15.156 slat (usec): min=47, max=2154.8k, avg=8380.03, stdev=100475.36 00:16:15.156 clat (msec): min=226, max=6674, avg=1234.58, stdev=1628.83 00:16:15.156 lat (msec): min=227, max=7450, avg=1242.96, stdev=1638.06 00:16:15.156 clat percentiles (msec): 00:16:15.156 | 1.00th=[ 228], 5.00th=[ 230], 10.00th=[ 232], 20.00th=[ 234], 00:16:15.156 | 30.00th=[ 243], 40.00th=[ 347], 50.00th=[ 414], 60.00th=[ 642], 00:16:15.156 | 70.00th=[ 818], 80.00th=[ 2366], 90.00th=[ 4530], 95.00th=[ 4597], 00:16:15.156 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 6678], 99.95th=[ 6678], 00:16:15.156 | 99.99th=[ 6678] 00:16:15.156 bw ( KiB/s): min= 2048, max=555008, per=7.64%, avg=215727.82, stdev=178920.09, samples=11 00:16:15.156 iops : min= 2, max= 542, avg=210.64, stdev=174.71, samples=11 00:16:15.156 lat (msec) : 250=32.43%, 500=24.65%, 750=7.08%, 1000=15.47%, >=2000=20.37% 00:16:15.156 cpu : usr=0.03%, sys=1.34%, ctx=1337, majf=0, minf=32769 00:16:15.156 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:16:15.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.156 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.156 issued rwts: total=1286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.156 job2: (groupid=0, jobs=1): err= 0: pid=3784692: Thu Nov 7 10:44:42 2024 00:16:15.156 read: IOPS=2, BW=2457KiB/s (2516kB/s)(31.0MiB/12920msec) 00:16:15.156 slat (usec): min=968, max=2124.4k, avg=347351.44, stdev=779191.48 00:16:15.156 clat (msec): min=2151, max=12916, avg=10569.67, stdev=3433.79 00:16:15.156 lat (msec): min=4220, max=12919, avg=10917.02, stdev=3080.32 00:16:15.156 clat percentiles (msec): 00:16:15.156 | 1.00th=[ 2165], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6409], 00:16:15.156 | 30.00th=[10671], 40.00th=[12818], 50.00th=[12818], 60.00th=[12953], 00:16:15.156 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:16:15.156 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:16:15.156 | 99.99th=[12953] 00:16:15.156 lat (msec) : >=2000=100.00% 00:16:15.156 cpu : usr=0.00%, sys=0.24%, ctx=43, majf=0, minf=7937 00:16:15.156 IO depths : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0% 00:16:15.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.156 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:15.156 issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.156 job2: (groupid=0, jobs=1): err= 0: pid=3784693: Thu Nov 7 10:44:42 2024 00:16:15.156 read: IOPS=16, BW=16.7MiB/s (17.5MB/s)(233MiB/13981msec) 00:16:15.156 slat (usec): min=62, max=2093.6k, avg=50901.22, stdev=298095.86 00:16:15.156 clat (msec): min=734, max=13377, avg=7414.76, stdev=5738.61 00:16:15.156 lat (msec): min=738, max=13378, avg=7465.66, stdev=5738.45 00:16:15.156 clat percentiles (msec): 00:16:15.156 | 1.00th=[ 735], 5.00th=[ 743], 10.00th=[ 760], 20.00th=[ 768], 00:16:15.156 | 30.00th=[ 785], 40.00th=[ 2802], 50.00th=[ 9194], 60.00th=[12818], 00:16:15.156 | 70.00th=[12953], 80.00th=[13087], 90.00th=[13221], 95.00th=[13355], 00:16:15.157 | 99.00th=[13355], 99.50th=[13355], 99.90th=[13355], 99.95th=[13355], 00:16:15.157 | 99.99th=[13355] 00:16:15.157 bw ( KiB/s): min= 2052, max=102400, per=1.10%, avg=30909.29, 
stdev=41767.24, samples=7 00:16:15.157 iops : min= 2, max= 100, avg=30.14, stdev=40.82, samples=7 00:16:15.157 lat (msec) : 750=6.44%, 1000=31.76%, >=2000=61.80% 00:16:15.157 cpu : usr=0.03%, sys=0.98%, ctx=213, majf=0, minf=32769 00:16:15.157 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.4%, 16=6.9%, 32=13.7%, >=64=73.0% 00:16:15.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.157 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:16:15.157 issued rwts: total=233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.157 job2: (groupid=0, jobs=1): err= 0: pid=3784694: Thu Nov 7 10:44:42 2024 00:16:15.157 read: IOPS=126, BW=127MiB/s (133MB/s)(1780MiB/14070msec) 00:16:15.157 slat (usec): min=51, max=2072.8k, avg=6696.91, stdev=84945.05 00:16:15.157 clat (msec): min=199, max=10130, avg=978.18, stdev=1659.36 00:16:15.157 lat (msec): min=200, max=10131, avg=984.88, stdev=1670.86 00:16:15.157 clat percentiles (msec): 00:16:15.157 | 1.00th=[ 220], 5.00th=[ 224], 10.00th=[ 226], 20.00th=[ 236], 00:16:15.157 | 30.00th=[ 253], 40.00th=[ 330], 50.00th=[ 380], 60.00th=[ 388], 00:16:15.157 | 70.00th=[ 447], 80.00th=[ 634], 90.00th=[ 2970], 95.00th=[ 6409], 00:16:15.157 | 99.00th=[ 6544], 99.50th=[ 6611], 99.90th=[ 9597], 99.95th=[10134], 00:16:15.157 | 99.99th=[10134] 00:16:15.157 bw ( KiB/s): min= 2048, max=563200, per=8.56%, avg=241707.21, stdev=188208.39, samples=14 00:16:15.157 iops : min= 2, max= 550, avg=236.00, stdev=183.85, samples=14 00:16:15.157 lat (msec) : 250=25.28%, 500=48.43%, 750=11.35%, 1000=0.22%, >=2000=14.72% 00:16:15.157 cpu : usr=0.04%, sys=1.71%, ctx=1607, majf=0, minf=32769 00:16:15.157 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:16:15.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.157 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.157 issued rwts: total=1780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.157 job2: (groupid=0, jobs=1): err= 0: pid=3784695: Thu Nov 7 10:44:42 2024 00:16:15.157 read: IOPS=34, BW=34.9MiB/s (36.6MB/s)(448MiB/12841msec) 00:16:15.157 slat (usec): min=53, max=2105.6k, avg=23863.10, stdev=195054.60 00:16:15.157 clat (msec): min=509, max=11164, avg=3554.89, stdev=4343.22 00:16:15.157 lat (msec): min=510, max=11164, avg=3578.75, stdev=4355.57 00:16:15.157 clat percentiles (msec): 00:16:15.157 | 1.00th=[ 514], 5.00th=[ 542], 10.00th=[ 567], 20.00th=[ 592], 00:16:15.157 | 30.00th=[ 625], 40.00th=[ 693], 50.00th=[ 760], 60.00th=[ 810], 00:16:15.157 | 70.00th=[ 4866], 80.00th=[10671], 90.00th=[10939], 95.00th=[11073], 00:16:15.157 | 99.00th=[11208], 99.50th=[11208], 99.90th=[11208], 99.95th=[11208], 00:16:15.157 | 99.99th=[11208] 00:16:15.157 bw ( KiB/s): min= 2048, max=200704, per=2.59%, avg=73045.33, stdev=80486.66, samples=9 00:16:15.157 iops : min= 2, max= 196, avg=71.33, stdev=78.60, samples=9 00:16:15.157 lat (msec) : 750=46.88%, 1000=19.64%, >=2000=33.48% 00:16:15.157 cpu : usr=0.00%, sys=1.01%, ctx=335, majf=0, minf=32769 00:16:15.157 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.1%, >=64=85.9% 00:16:15.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.157 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:16:15.157 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.157 
latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.157 job2: (groupid=0, jobs=1): err= 0: pid=3784696: Thu Nov 7 10:44:42 2024 00:16:15.157 read: IOPS=5, BW=5573KiB/s (5707kB/s)(71.0MiB/13045msec) 00:16:15.157 slat (usec): min=840, max=2124.6k, avg=153478.19, stdev=534259.78 00:16:15.157 clat (msec): min=2147, max=13041, avg=11472.48, stdev=2934.05 00:16:15.157 lat (msec): min=4186, max=13044, avg=11625.96, stdev=2716.21 00:16:15.157 clat percentiles (msec): 00:16:15.157 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[10671], 00:16:15.157 | 30.00th=[12818], 40.00th=[12953], 50.00th=[12953], 60.00th=[12953], 00:16:15.157 | 70.00th=[12953], 80.00th=[13087], 90.00th=[13087], 95.00th=[13087], 00:16:15.157 | 99.00th=[13087], 99.50th=[13087], 99.90th=[13087], 99.95th=[13087], 00:16:15.157 | 99.99th=[13087] 00:16:15.157 lat (msec) : >=2000=100.00% 00:16:15.157 cpu : usr=0.00%, sys=0.57%, ctx=92, majf=0, minf=18177 00:16:15.157 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.3%, 16=22.5%, 32=45.1%, >=64=11.3% 00:16:15.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.157 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:15.157 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.157 job2: (groupid=0, jobs=1): err= 0: pid=3784697: Thu Nov 7 10:44:42 2024 00:16:15.157 read: IOPS=5, BW=5142KiB/s (5266kB/s)(60.0MiB/11948msec) 00:16:15.157 slat (usec): min=779, max=2067.5k, avg=197586.15, stdev=581456.27 00:16:15.157 clat (msec): min=92, max=11939, avg=6132.70, stdev=3396.46 00:16:15.157 lat (msec): min=2130, max=11947, avg=6330.28, stdev=3383.91 00:16:15.157 clat percentiles (msec): 00:16:15.157 | 1.00th=[ 92], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 2198], 00:16:15.157 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6477], 00:16:15.157 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[11879], 95.00th=[11879], 00:16:15.157 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:16:15.157 | 99.99th=[11879] 00:16:15.157 lat (msec) : 100=1.67%, >=2000=98.33% 00:16:15.157 cpu : usr=0.00%, sys=0.49%, ctx=69, majf=0, minf=15361 00:16:15.157 IO depths : 1=1.7%, 2=3.3%, 4=6.7%, 8=13.3%, 16=26.7%, 32=48.3%, >=64=0.0% 00:16:15.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.157 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:15.157 issued rwts: total=60,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.157 job2: (groupid=0, jobs=1): err= 0: pid=3784698: Thu Nov 7 10:44:42 2024 00:16:15.157 read: IOPS=3, BW=4011KiB/s (4107kB/s)(51.0MiB/13020msec) 00:16:15.157 slat (usec): min=979, max=2156.7k, avg=213211.78, stdev=628935.36 00:16:15.157 clat (msec): min=2145, max=13018, avg=11193.29, stdev=3150.57 00:16:15.157 lat (msec): min=4193, max=13019, avg=11406.50, stdev=2882.55 00:16:15.157 clat percentiles (msec): 00:16:15.157 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[ 8490], 00:16:15.157 | 30.00th=[12818], 40.00th=[12953], 50.00th=[12953], 60.00th=[12953], 00:16:15.157 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:16:15.157 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:16:15.157 | 99.99th=[12953] 00:16:15.157 lat (msec) : >=2000=100.00% 00:16:15.157 cpu : usr=0.00%, sys=0.41%, ctx=78, majf=0, minf=13057 
00:16:15.157 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0%
00:16:15.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.157 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:16:15.157 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.157 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.157 job2: (groupid=0, jobs=1): err= 0: pid=3784699: Thu Nov 7 10:44:42 2024
00:16:15.157 read: IOPS=3, BW=3279KiB/s (3358kB/s)(45.0MiB/14051msec)
00:16:15.157 slat (usec): min=627, max=2084.7k, avg=265003.16, stdev=664968.11
00:16:15.157 clat (msec): min=2124, max=14046, avg=11292.92, stdev=3599.99
00:16:15.157 lat (msec): min=4178, max=14050, avg=11557.93, stdev=3339.25
00:16:15.157 clat percentiles (msec):
00:16:15.157 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 8490],
00:16:15.157 | 30.00th=[10671], 40.00th=[12818], 50.00th=[12818], 60.00th=[13892],
00:16:15.157 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026],
00:16:15.157 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026],
00:16:15.157 | 99.99th=[14026]
00:16:15.157 lat (msec) : >=2000=100.00%
00:16:15.157 cpu : usr=0.00%, sys=0.27%, ctx=64, majf=0, minf=11521
00:16:15.157 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0%
00:16:15.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.157 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:16:15.157 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.157 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.157 job2: (groupid=0, jobs=1): err= 0: pid=3784700: Thu Nov 7 10:44:42 2024
00:16:15.157 read: IOPS=1, BW=1431KiB/s (1465kB/s)(18.0MiB/12880msec)
00:16:15.157 slat (msec): min=7, max=2127, avg=596.00, stdev=887.03
00:16:15.157 clat (msec): min=2151, max=12865, avg=7868.00, stdev=3355.99
00:16:15.157 lat (msec): min=4199, max=12879, avg=8464.01, stdev=3231.32
00:16:15.157 clat percentiles (msec):
00:16:15.157 | 1.00th=[ 2165], 5.00th=[ 2165], 10.00th=[ 4212], 20.00th=[ 4212],
00:16:15.157 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 6409], 60.00th=[ 8490],
00:16:15.157 | 70.00th=[10671], 80.00th=[10671], 90.00th=[12818], 95.00th=[12818],
00:16:15.157 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:16:15.157 | 99.99th=[12818]
00:16:15.157 lat (msec) : >=2000=100.00%
00:16:15.157 cpu : usr=0.00%, sys=0.12%, ctx=37, majf=0, minf=4609
00:16:15.157 IO depths : 1=5.6%, 2=11.1%, 4=22.2%, 8=44.4%, 16=16.7%, 32=0.0%, >=64=0.0%
00:16:15.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.157 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:16:15.157 issued rwts: total=18,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.157 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.157 job2: (groupid=0, jobs=1): err= 0: pid=3784701: Thu Nov 7 10:44:42 2024
00:16:15.157 read: IOPS=1, BW=1102KiB/s (1128kB/s)(15.0MiB/13941msec)
00:16:15.157 slat (msec): min=3, max=2174, avg=787.48, stdev=989.63
00:16:15.157 clat (msec): min=2128, max=13936, avg=8384.37, stdev=4068.73
00:16:15.157 lat (msec): min=4189, max=13940, avg=9171.85, stdev=3911.52
00:16:15.157 clat percentiles (msec):
00:16:15.157 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4178], 20.00th=[ 4212],
00:16:15.157 | 30.00th=[ 4279], 40.00th=[ 6342], 50.00th=[ 8557], 60.00th=[10671],
00:16:15.157 | 70.00th=[10671], 80.00th=[12818], 90.00th=[13892], 95.00th=[13892],
00:16:15.157 | 99.00th=[13892], 99.50th=[13892], 99.90th=[13892], 99.95th=[13892],
00:16:15.157 | 99.99th=[13892]
00:16:15.157 lat (msec) : >=2000=100.00%
00:16:15.157 cpu : usr=0.00%, sys=0.09%, ctx=48, majf=0, minf=3841
00:16:15.157 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:15.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.158 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.158 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.158 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.158 job2: (groupid=0, jobs=1): err= 0: pid=3784702: Thu Nov 7 10:44:42 2024
00:16:15.158 read: IOPS=3, BW=3731KiB/s (3820kB/s)(47.0MiB/12901msec)
00:16:15.158 slat (usec): min=703, max=2109.8k, avg=228699.93, stdev=644826.08
00:16:15.158 clat (msec): min=2151, max=12899, avg=9315.78, stdev=3523.01
00:16:15.158 lat (msec): min=4197, max=12900, avg=9544.48, stdev=3394.36
00:16:15.158 clat percentiles (msec):
00:16:15.158 | 1.00th=[ 2165], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 6342],
00:16:15.158 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[10671],
00:16:15.158 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12953], 95.00th=[12953],
00:16:15.158 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:16:15.158 | 99.99th=[12953]
00:16:15.158 lat (msec) : >=2000=100.00%
00:16:15.158 cpu : usr=0.00%, sys=0.36%, ctx=45, majf=0, minf=12033
00:16:15.158 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0%
00:16:15.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.158 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:16:15.158 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.158 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.158 job2: (groupid=0, jobs=1): err= 0: pid=3784703: Thu Nov 7 10:44:42 2024
00:16:15.158 read: IOPS=1, BW=1543KiB/s (1580kB/s)(21.0MiB/13934msec)
00:16:15.158 slat (msec): min=8, max=2103, avg=562.34, stdev=894.32
00:16:15.158 clat (msec): min=2123, max=13866, avg=7345.81, stdev=3029.85
00:16:15.158 lat (msec): min=4178, max=13933, avg=7908.15, stdev=3107.12
00:16:15.158 clat percentiles (msec):
00:16:15.158 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4245],
00:16:15.158 | 30.00th=[ 6342], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 8490],
00:16:15.158 | 70.00th=[ 8557], 80.00th=[ 8557], 90.00th=[10671], 95.00th=[12818],
00:16:15.158 | 99.00th=[13892], 99.50th=[13892], 99.90th=[13892], 99.95th=[13892],
00:16:15.158 | 99.99th=[13892]
00:16:15.158 lat (msec) : >=2000=100.00%
00:16:15.158 cpu : usr=0.01%, sys=0.11%, ctx=57, majf=0, minf=5377
00:16:15.158 IO depths : 1=4.8%, 2=9.5%, 4=19.0%, 8=38.1%, 16=28.6%, 32=0.0%, >=64=0.0%
00:16:15.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.158 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:16:15.158 issued rwts: total=21,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.158 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.158 job2: (groupid=0, jobs=1): err= 0: pid=3784705: Thu Nov 7 10:44:42 2024
00:16:15.158 read: IOPS=3, BW=3636KiB/s (3724kB/s)(50.0MiB/14080msec)
00:16:15.158 slat (usec): min=996, max=2099.7k, avg=238788.27, stdev=637440.14
00:16:15.158 clat (msec): min=2139, max=14078, avg=11300.46, stdev=3740.05
00:16:15.158 lat (msec): min=4187, max=14079, avg=11539.25, stdev=3517.79
00:16:15.158 clat percentiles (msec):
00:16:15.158 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6409],
00:16:15.158 | 30.00th=[ 8557], 40.00th=[12818], 50.00th=[13892], 60.00th=[14026],
00:16:15.158 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026],
00:16:15.158 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026],
00:16:15.158 | 99.99th=[14026]
00:16:15.158 lat (msec) : >=2000=100.00%
00:16:15.158 cpu : usr=0.01%, sys=0.39%, ctx=86, majf=0, minf=12801
00:16:15.158 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0%
00:16:15.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.158 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:16:15.158 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.158 latency : target=0, window=0, percentile=100.00%, depth=128
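The job2 summaries above follow fio's standard per-job layout: a read line with IOPS and bandwidth, slat/clat/lat statistics, a completion-latency percentile table, and depth histograms. A quick way to tabulate PID, IOPS, and bandwidth for every job in a saved copy of this console output is sketched below; build.log is a hypothetical stand-in path, and the field layout is assumed to match the records above.

# Minimal sketch: list pid, IOPS and bandwidth per fio job in the saved log.
# build.log is a placeholder path (assumption); layout taken from the records above.
awk '/job[0-9]+: \(groupid/ { pid=$0; sub(/.*pid=/, "", pid); sub(/:.*/, "", pid) }
     /read: IOPS=/ { iops=$0; sub(/.*IOPS=/, "", iops); sub(/,.*/, "", iops);
                     bw=$0; sub(/.*BW=/, "", bw); sub(/ .*/, "", bw);
                     print pid, iops, bw }' build.log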
00:16:15.158 job3: (groupid=0, jobs=1): err= 0: pid=3784712: Thu Nov 7 10:44:42 2024
00:16:15.158 read: IOPS=7, BW=7377KiB/s (7554kB/s)(94.0MiB/13049msec)
00:16:15.158 slat (usec): min=635, max=2112.7k, avg=115972.24, stdev=438899.39
00:16:15.158 clat (msec): min=2146, max=13047, avg=11558.46, stdev=2730.08
00:16:15.158 lat (msec): min=4179, max=13048, avg=11674.43, stdev=2551.69
00:16:15.158 clat percentiles (msec):
00:16:15.158 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[10671],
00:16:15.158 | 30.00th=[11745], 40.00th=[12818], 50.00th=[12953], 60.00th=[12953],
00:16:15.158 | 70.00th=[12953], 80.00th=[13087], 90.00th=[13087], 95.00th=[13087],
00:16:15.158 | 99.00th=[13087], 99.50th=[13087], 99.90th=[13087], 99.95th=[13087],
00:16:15.158 | 99.99th=[13087]
00:16:15.158 lat (msec) : >=2000=100.00%
00:16:15.158 cpu : usr=0.00%, sys=0.65%, ctx=136, majf=0, minf=24065
00:16:15.158 IO depths : 1=1.1%, 2=2.1%, 4=4.3%, 8=8.5%, 16=17.0%, 32=34.0%, >=64=33.0%
00:16:15.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.158 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:16:15.158 issued rwts: total=94,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.158 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.158 job3: (groupid=0, jobs=1): err= 0: pid=3784713: Thu Nov 7 10:44:42 2024
00:16:15.158 read: IOPS=2, BW=2344KiB/s (2400kB/s)(32.0MiB/13979msec)
00:16:15.158 slat (usec): min=1183, max=2073.2k, avg=369877.42, stdev=762929.28
00:16:15.158 clat (msec): min=2141, max=13975, avg=8663.61, stdev=3559.16
00:16:15.158 lat (msec): min=4176, max=13978, avg=9033.49, stdev=3473.54
00:16:15.158 clat percentiles (msec):
00:16:15.158 | 1.00th=[ 2140], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4279],
00:16:15.158 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 8490], 60.00th=[10671],
00:16:15.158 | 70.00th=[10671], 80.00th=[12818], 90.00th=[14026], 95.00th=[14026],
00:16:15.158 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026],
00:16:15.158 | 99.99th=[14026]
00:16:15.158 lat (msec) : >=2000=100.00%
00:16:15.158 cpu : usr=0.00%, sys=0.23%, ctx=61, majf=0, minf=8193
00:16:15.158 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0%
00:16:15.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.158 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:16:15.158 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.158 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.158 job3: (groupid=0, jobs=1): err= 0: pid=3784714: Thu Nov 7 10:44:42 2024
00:16:15.158 read: IOPS=70, BW=70.5MiB/s (73.9MB/s)(994MiB/14107msec)
00:16:15.158 slat (usec): min=48, max=2042.5k, avg=12035.50, stdev=97188.63
00:16:15.158 clat (msec): min=241, max=9662, avg=1761.73, stdev=2355.57
00:16:15.158 lat (msec): min=242, max=9672, avg=1773.77, stdev=2369.15
00:16:15.158 clat percentiles (msec):
00:16:15.158 | 1.00th=[ 243], 5.00th=[ 245], 10.00th=[ 247], 20.00th=[ 275],
00:16:15.158 | 30.00th=[ 351], 40.00th=[ 472], 50.00th=[ 709], 60.00th=[ 768],
00:16:15.158 | 70.00th=[ 1183], 80.00th=[ 4329], 90.00th=[ 5738], 95.00th=[ 6409],
00:16:15.158 | 99.00th=[ 9463], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597],
00:16:15.158 | 99.99th=[ 9597]
00:16:15.158 bw ( KiB/s): min= 2052, max=522240, per=3.93%, avg=110976.25, stdev=141305.30, samples=16
00:16:15.158 iops : min= 2, max= 510, avg=108.38, stdev=137.99, samples=16
00:16:15.159 lat (msec) : 250=12.37%, 500=29.28%, 750=14.49%, 1000=12.27%, 2000=7.75%
00:16:15.159 lat (msec) : >=2000=23.84%
00:16:15.159 cpu : usr=0.04%, sys=1.52%, ctx=1340, majf=0, minf=32769
00:16:15.159 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7%
00:16:15.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.159 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:15.159 issued rwts: total=994,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.159 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.159 job3: (groupid=0, jobs=1): err= 0: pid=3784715: Thu Nov 7 10:44:42 2024
00:16:15.159 read: IOPS=4, BW=4722KiB/s (4835kB/s)(60.0MiB/13012msec)
00:16:15.159 slat (usec): min=937, max=2105.3k, avg=181047.90, stdev=544849.08
00:16:15.159 clat (msec): min=2147, max=13009, avg=10791.84, stdev=3107.83
00:16:15.159 lat (msec): min=4191, max=13011, avg=10972.89, stdev=2905.57
00:16:15.159 clat percentiles (msec):
00:16:15.159 | 1.00th=[ 2165], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 8490],
00:16:15.159 | 30.00th=[10671], 40.00th=[11745], 50.00th=[12818], 60.00th=[12953],
00:16:15.159 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:16:15.159 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:16:15.159 | 99.99th=[12953]
00:16:15.159 lat (msec) : >=2000=100.00%
00:16:15.159 cpu : usr=0.00%, sys=0.48%, ctx=86, majf=0, minf=15361
00:16:15.159 IO depths : 1=1.7%, 2=3.3%, 4=6.7%, 8=13.3%, 16=26.7%, 32=48.3%, >=64=0.0%
00:16:15.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.159 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:16:15.159 issued rwts: total=60,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.159 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.159 job3: (groupid=0, jobs=1): err= 0: pid=3784716: Thu Nov 7 10:44:42 2024
00:16:15.159 read: IOPS=169, BW=169MiB/s (178MB/s)(1706MiB/10074msec)
00:16:15.159 slat (usec): min=40, max=1007.1k, avg=5859.59, stdev=26577.59
00:16:15.159 clat (msec): min=64, max=2048, avg=645.89, stdev=223.09
00:16:15.159 lat (msec): min=76, max=2050, avg=651.75, stdev=226.39
00:16:15.159 clat percentiles (msec):
00:16:15.159 | 1.00th=[ 188], 5.00th=[ 351], 10.00th=[ 368], 20.00th=[ 430],
00:16:15.159 | 30.00th=[ 523], 40.00th=[ 592], 50.00th=[ 667], 60.00th=[ 726],
00:16:15.159 | 70.00th=[ 760], 80.00th=[ 818], 90.00th=[ 885], 95.00th=[ 911],
00:16:15.159 | 99.00th=[ 986], 99.50th=[ 1989], 99.90th=[ 2005], 99.95th=[ 2056],
00:16:15.159 | 99.99th=[ 2056]
00:16:15.159 bw ( KiB/s): min= 6144, max=370688, per=6.74%, avg=190336.00, stdev=78580.13, samples=16
00:16:15.159 iops : min= 6, max= 362, avg=185.87, stdev=76.74, samples=16
00:16:15.159 lat (msec) : 100=0.82%, 250=0.88%, 500=24.56%, 750=40.91%, 1000=32.06%
00:16:15.159 lat (msec) : 2000=0.70%, >=2000=0.06%
00:16:15.159 cpu : usr=0.12%, sys=2.82%, ctx=1486, majf=0, minf=32769
00:16:15.159 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3%
00:16:15.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.159 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:15.159 issued rwts: total=1706,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.159 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.159 job3: (groupid=0, jobs=1): err= 0: pid=3784717: Thu Nov 7 10:44:42 2024
00:16:15.159 read: IOPS=1, BW=1396KiB/s (1430kB/s)(19.0MiB/13936msec)
00:16:15.159 slat (msec): min=5, max=2108, avg=620.75, stdev=930.37
00:16:15.159 clat (msec): min=2141, max=13927, avg=9372.23, stdev=3899.05
00:16:15.159 lat (msec): min=4218, max=13935, avg=9992.98, stdev=3612.15
00:16:15.159 clat percentiles (msec):
00:16:15.159 | 1.00th=[ 2140], 5.00th=[ 2140], 10.00th=[ 4212], 20.00th=[ 6342],
00:16:15.159 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671],
00:16:15.159 | 70.00th=[12818], 80.00th=[13892], 90.00th=[13892], 95.00th=[13892],
00:16:15.159 | 99.00th=[13892], 99.50th=[13892], 99.90th=[13892], 99.95th=[13892],
00:16:15.159 | 99.99th=[13892]
00:16:15.159 lat (msec) : >=2000=100.00%
00:16:15.159 cpu : usr=0.00%, sys=0.13%, ctx=67, majf=0, minf=4865
00:16:15.159 IO depths : 1=5.3%, 2=10.5%, 4=21.1%, 8=42.1%, 16=21.1%, 32=0.0%, >=64=0.0%
00:16:15.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.159 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:16:15.159 issued rwts: total=19,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.159 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.159 job3: (groupid=0, jobs=1): err= 0: pid=3784718: Thu Nov 7 10:44:42 2024
00:16:15.159 read: IOPS=3, BW=3712KiB/s (3801kB/s)(51.0MiB/14069msec)
00:16:15.159 slat (usec): min=1000, max=2100.7k, avg=233841.13, stdev=625375.35
00:16:15.159 clat (msec): min=2142, max=14065, avg=11108.22, stdev=3823.67
00:16:15.159 lat (msec): min=4164, max=14068, avg=11342.06, stdev=3623.82
00:16:15.159 clat percentiles (msec):
00:16:15.159 | 1.00th=[ 2140], 5.00th=[ 4178], 10.00th=[ 4279], 20.00th=[ 6409],
00:16:15.159 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[14026], 60.00th=[14026],
00:16:15.159 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026],
00:16:15.159 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026],
00:16:15.159 | 99.99th=[14026]
00:16:15.159 lat (msec) : >=2000=100.00%
00:16:15.159 cpu : usr=0.01%, sys=0.38%, ctx=79, majf=0, minf=13057
00:16:15.159 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0%
00:16:15.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.159 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:16:15.159 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.159 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.159 job3: (groupid=0, jobs=1): err= 0: pid=3784719: Thu Nov 7 10:44:42 2024
00:16:15.159 read: IOPS=10, BW=10.8MiB/s (11.3MB/s)(108MiB/10036msec)
00:16:15.159 slat (usec): min=572, max=2117.4k, avg=92597.42, stdev=367074.34
00:16:15.159 clat (msec): min=34, max=9946, avg=1558.81, stdev=1971.70
00:16:15.159 lat (msec): min=35, max=10035, avg=1651.41, stdev=2128.09
00:16:15.159 clat percentiles (msec):
00:16:15.159 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 144],
00:16:15.159 | 30.00th=[ 234], 40.00th=[ 338], 50.00th=[ 1368], 60.00th=[ 1536],
00:16:15.159 | 70.00th=[ 1888], 80.00th=[ 2333], 90.00th=[ 2500], 95.00th=[ 6745],
00:16:15.159 | 99.00th=[ 8926], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000],
00:16:15.159 | 99.99th=[10000]
00:16:15.159 lat (msec) : 50=12.96%, 100=1.85%, 250=15.74%, 500=12.04%, 2000=32.41%
00:16:15.159 lat (msec) : >=2000=25.00%
00:16:15.159 cpu : usr=0.01%, sys=0.64%, ctx=283, majf=0, minf=27649
00:16:15.159 IO depths : 1=0.9%, 2=1.9%, 4=3.7%, 8=7.4%, 16=14.8%, 32=29.6%, >=64=41.7%
00:16:15.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.159 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:16:15.159 issued rwts: total=108,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.159 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.159 job3: (groupid=0, jobs=1): err= 0: pid=3784720: Thu Nov 7 10:44:42 2024
00:16:15.159 read: IOPS=30, BW=30.7MiB/s (32.2MB/s)(367MiB/11969msec)
00:16:15.159 slat (usec): min=44, max=2070.8k, avg=27365.44, stdev=183069.12
00:16:15.159 clat (msec): min=582, max=8849, avg=3145.71, stdev=2841.22
00:16:15.159 lat (msec): min=583, max=8862, avg=3173.08, stdev=2858.59
00:16:15.159 clat percentiles (msec):
00:16:15.159 | 1.00th=[ 600], 5.00th=[ 651], 10.00th=[ 667], 20.00th=[ 743],
00:16:15.159 | 30.00th=[ 1083], 40.00th=[ 1351], 50.00th=[ 2265], 60.00th=[ 2400],
00:16:15.159 | 70.00th=[ 2567], 80.00th=[ 7416], 90.00th=[ 8288], 95.00th=[ 8557],
00:16:15.159 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792],
00:16:15.159 | 99.99th=[ 8792]
00:16:15.159 bw ( KiB/s): min= 1442, max=188416, per=3.48%, avg=98182.80, stdev=77024.16, samples=5
00:16:15.159 iops : min= 1, max= 184, avg=95.80, stdev=75.35, samples=5
00:16:15.159 lat (msec) : 750=20.71%, 1000=7.63%, 2000=12.81%, >=2000=58.86%
00:16:15.159 cpu : usr=0.01%, sys=0.96%, ctx=520, majf=0, minf=32769
00:16:15.159 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.7%, >=64=82.8%
00:16:15.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.159 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:16:15.159 issued rwts: total=367,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.159 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.159 job3: (groupid=0, jobs=1): err= 0: pid=3784721: Thu Nov 7 10:44:42 2024
00:16:15.159 read: IOPS=2, BW=2353KiB/s (2409kB/s)(25.0MiB/10881msec)
00:16:15.159 slat (usec): min=1787, max=2113.7k, avg=432338.74, stdev=784298.45
00:16:15.159 clat (msec): min=71, max=10870, avg=6608.62, stdev=3396.66
00:16:15.159 lat (msec): min=2105, max=10880, avg=7040.96, stdev=3212.84
00:16:15.159 clat percentiles (msec):
00:16:15.159 | 1.00th=[ 72], 5.00th=[ 2106], 10.00th=[ 2140], 20.00th=[ 2165],
00:16:15.159 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8490],
00:16:15.159 | 70.00th=[ 9597], 80.00th=[ 9597], 90.00th=[10805], 95.00th=[10805],
00:16:15.159 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:16:15.159 | 99.99th=[10805]
00:16:15.159 lat (msec) : 100=4.00%, >=2000=96.00%
00:16:15.159 cpu : usr=0.01%, sys=0.18%, ctx=91, majf=0, minf=6401
00:16:15.159 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0%
00:16:15.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.159 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:16:15.159 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.159 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.159 job3: (groupid=0, jobs=1): err= 0: pid=3784722: Thu Nov 7 10:44:42 2024
00:16:15.159 read: IOPS=35, BW=35.4MiB/s (37.1MB/s)(495MiB/13985msec)
00:16:15.159 slat (usec): min=37, max=2051.9k, avg=23921.74, stdev=163015.35
00:16:15.159 clat (msec): min=915, max=12812, avg=3151.55, stdev=3361.88
00:16:15.159 lat (msec): min=919, max=12840, avg=3175.47, stdev=3369.51
00:16:15.159 clat percentiles (msec):
00:16:15.159 | 1.00th=[ 919], 5.00th=[ 953], 10.00th=[ 995], 20.00th=[ 1045],
00:16:15.159 | 30.00th=[ 1116], 40.00th=[ 1150], 50.00th=[ 1217], 60.00th=[ 1250],
00:16:15.159 | 70.00th=[ 1334], 80.00th=[ 8557], 90.00th=[ 9060], 95.00th=[ 9194],
00:16:15.159 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[12818], 99.95th=[12818],
00:16:15.159 | 99.99th=[12818]
00:16:15.159 bw ( KiB/s): min= 2052, max=129024, per=2.42%, avg=68382.73, stdev=49312.06, samples=11
00:16:15.159 iops : min= 2, max= 126, avg=66.73, stdev=48.23, samples=11
00:16:15.159 lat (msec) : 1000=13.13%, 2000=57.98%, >=2000=28.89%
00:16:15.159 cpu : usr=0.01%, sys=0.87%, ctx=685, majf=0, minf=32769
00:16:15.159 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.5%, >=64=87.3%
00:16:15.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.160 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:16:15.160 issued rwts: total=495,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.160 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.160 job3: (groupid=0, jobs=1): err= 0: pid=3784723: Thu Nov 7 10:44:42 2024
00:16:15.160 read: IOPS=74, BW=74.3MiB/s (77.9MB/s)(809MiB/10894msec)
00:16:15.160 slat (usec): min=49, max=2031.2k, avg=13374.08, stdev=133773.68
00:16:15.160 clat (msec): min=71, max=6451, avg=1541.91, stdev=1459.36
00:16:15.160 lat (msec): min=231, max=8096, avg=1555.29, stdev=1473.84
00:16:15.160 clat percentiles (msec):
00:16:15.160 | 1.00th=[ 239], 5.00th=[ 243], 10.00th=[ 245], 20.00th=[ 305],
00:16:15.160 | 30.00th=[ 363], 40.00th=[ 435], 50.00th=[ 1687], 60.00th=[ 1871],
00:16:15.160 | 70.00th=[ 1888], 80.00th=[ 1972], 90.00th=[ 4245], 95.00th=[ 4329],
00:16:15.160 | 99.00th=[ 6074], 99.50th=[ 6074], 99.90th=[ 6477], 99.95th=[ 6477],
00:16:15.160 | 99.99th=[ 6477]
00:16:15.160 bw ( KiB/s): min=12288, max=348160, per=7.06%, avg=199241.14, stdev=134419.72, samples=7
00:16:15.160 iops : min= 12, max= 340, avg=194.57, stdev=131.27, samples=7
00:16:15.160 lat (msec) : 100=0.12%, 250=11.74%, 500=31.03%, 750=5.44%, 1000=0.49%
00:16:15.160 lat (msec) : 2000=34.73%, >=2000=16.44%
00:16:15.160 cpu : usr=0.01%, sys=1.56%, ctx=742, majf=0, minf=32769
00:16:15.160 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2%
00:16:15.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.160 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:15.160 issued rwts: total=809,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.160 latency : target=0, window=0, percentile=100.00%, depth=128
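In these summaries the bw and iops lines are averages over the same samples, so they should agree through the transfer size. A worked check using values copied from the pid=3784723 job above:

# avg bandwidth / avg IOPS ~= request size in KiB (values from pid=3784723 above).
echo 'scale=1; 199241.14 / 194.57' | bc    # ~1024.0 KiB, i.e. ~1 MiB reads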
00:16:15.160 job3: (groupid=0, jobs=1): err= 0: pid=3784724: Thu Nov 7 10:44:42 2024
00:16:15.160 read: IOPS=18, BW=18.9MiB/s (19.8MB/s)(204MiB/10821msec)
00:16:15.160 slat (usec): min=74, max=2078.2k, avg=52674.80, stdev=293894.41
00:16:15.160 clat (msec): min=73, max=9195, avg=5780.11, stdev=3362.17
00:16:15.160 lat (msec): min=813, max=9196, avg=5832.79, stdev=3337.22
00:16:15.160 clat percentiles (msec):
00:16:15.160 | 1.00th=[ 810], 5.00th=[ 919], 10.00th=[ 927], 20.00th=[ 944],
00:16:15.160 | 30.00th=[ 2198], 40.00th=[ 5000], 50.00th=[ 7080], 60.00th=[ 8658],
00:16:15.160 | 70.00th=[ 8792], 80.00th=[ 8926], 90.00th=[ 8926], 95.00th=[ 9060],
00:16:15.160 | 99.00th=[ 9060], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194],
00:16:15.160 | 99.99th=[ 9194]
00:16:15.160 bw ( KiB/s): min= 8192, max=91976, per=1.10%, avg=31092.80, stdev=36137.94, samples=5
00:16:15.160 iops : min= 8, max= 89, avg=30.20, stdev=34.95, samples=5
00:16:15.160 lat (msec) : 100=0.49%, 1000=19.61%, 2000=1.96%, >=2000=77.94%
00:16:15.160 cpu : usr=0.01%, sys=1.08%, ctx=135, majf=0, minf=32769
00:16:15.160 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=3.9%, 16=7.8%, 32=15.7%, >=64=69.1%
00:16:15.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.160 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3%
00:16:15.160 issued rwts: total=204,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.160 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.160 job4: (groupid=0, jobs=1): err= 0: pid=3784747: Thu Nov 7 10:44:42 2024
00:16:15.160 read: IOPS=113, BW=114MiB/s (120MB/s)(1376MiB/12073msec)
00:16:15.160 slat (usec): min=48, max=2082.7k, avg=8696.86, stdev=110709.30
00:16:15.160 clat (msec): min=100, max=8789, avg=1089.23, stdev=2358.55
00:16:15.160 lat (msec): min=119, max=8790, avg=1097.92, stdev=2366.87
00:16:15.160 clat percentiles (msec):
00:16:15.160 | 1.00th=[ 122], 5.00th=[ 123], 10.00th=[ 124], 20.00th=[ 127],
00:16:15.160 | 30.00th=[ 165], 40.00th=[ 245], 50.00th=[ 268], 60.00th=[ 351],
00:16:15.160 | 70.00th=[ 447], 80.00th=[ 642], 90.00th=[ 2299], 95.00th=[ 8792],
00:16:15.160 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792],
00:16:15.160 | 99.99th=[ 8792]
00:16:15.160 bw ( KiB/s): min=10240, max=792576, per=10.06%, avg=283845.22, stdev=285238.64, samples=9
00:16:15.160 iops : min= 10, max= 774, avg=277.11, stdev=278.58, samples=9
00:16:15.160 lat (msec) : 250=44.69%, 500=27.47%, 750=13.52%, 1000=3.71%, >=2000=10.61%
00:16:15.160 cpu : usr=0.07%, sys=1.70%, ctx=1365, majf=0, minf=32770
00:16:15.160 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4%
00:16:15.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.160 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:15.160 issued rwts: total=1376,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.160 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.160 job4: (groupid=0, jobs=1): err= 0: pid=3784748: Thu Nov 7 10:44:42 2024
00:16:15.160 read: IOPS=19, BW=19.2MiB/s (20.2MB/s)(232MiB/12054msec)
00:16:15.160 slat (usec): min=60, max=2142.2k, avg=51543.69, stdev=251748.40
00:16:15.160 clat (msec): min=94, max=10780, avg=5242.79, stdev=1589.81
00:16:15.160 lat (msec): min=2129, max=10798, avg=5294.33, stdev=1584.21
00:16:15.160 clat percentiles (msec):
00:16:15.160 | 1.00th=[ 2140], 5.00th=[ 2668], 10.00th=[ 2735], 20.00th=[ 2869],
00:16:15.160 | 30.00th=[ 5269], 40.00th=[ 5537], 50.00th=[ 5671], 60.00th=[ 5738],
00:16:15.160 | 70.00th=[ 5873], 80.00th=[ 6074], 90.00th=[ 7416], 95.00th=[ 7617],
00:16:15.160 | 99.00th=[ 7684], 99.50th=[ 8658], 99.90th=[10805], 99.95th=[10805],
00:16:15.160 | 99.99th=[10805]
00:16:15.160 bw ( KiB/s): min=10240, max=100352, per=1.26%, avg=35498.67, stdev=34964.26, samples=6
00:16:15.160 iops : min= 10, max= 98, avg=34.67, stdev=34.14, samples=6
00:16:15.160 lat (msec) : 100=0.43%, >=2000=99.57%
00:16:15.160 cpu : usr=0.00%, sys=0.89%, ctx=686, majf=0, minf=32769
00:16:15.160 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.4%, 16=6.9%, 32=13.8%, >=64=72.8%
00:16:15.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.160 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9%
00:16:15.160 issued rwts: total=232,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.160 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.160 job4: (groupid=0, jobs=1): err= 0: pid=3784749: Thu Nov 7 10:44:42 2024
00:16:15.160 read: IOPS=6, BW=6736KiB/s (6898kB/s)(79.0MiB/12009msec)
00:16:15.160 slat (usec): min=480, max=2130.5k, avg=150723.48, stdev=497537.91
00:16:15.160 clat (msec): min=100, max=11993, avg=9463.51, stdev=3389.33
00:16:15.160 lat (msec): min=2119, max=12008, avg=9614.23, stdev=3228.57
00:16:15.160 clat percentiles (msec):
00:16:15.160 | 1.00th=[ 102], 5.00th=[ 2165], 10.00th=[ 2265], 20.00th=[ 6409],
00:16:15.160 | 30.00th=[10805], 40.00th=[10939], 50.00th=[11073], 60.00th=[11342],
00:16:15.160 | 70.00th=[11476], 80.00th=[11610], 90.00th=[11879], 95.00th=[11879],
00:16:15.160 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013],
00:16:15.160 | 99.99th=[12013]
00:16:15.160 lat (msec) : 250=1.27%, >=2000=98.73%
00:16:15.160 cpu : usr=0.00%, sys=0.58%, ctx=272, majf=0, minf=20225
00:16:15.160 IO depths : 1=1.3%, 2=2.5%, 4=5.1%, 8=10.1%, 16=20.3%, 32=40.5%, >=64=20.3%
00:16:15.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.160 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:16:15.160 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.160 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.160 job4: (groupid=0, jobs=1): err= 0: pid=3784750: Thu Nov 7 10:44:42 2024
00:16:15.160 read: IOPS=7, BW=7865KiB/s (8054kB/s)(92.0MiB/11978msec)
00:16:15.160 slat (usec): min=1676, max=2130.5k, avg=129093.60, stdev=465824.81
00:16:15.160 clat (msec): min=100, max=11967, avg=9600.67, stdev=3214.55
00:16:15.160 lat (msec): min=2119, max=11977, avg=9729.76, stdev=3063.80
00:16:15.160 clat percentiles (msec):
00:16:15.160 | 1.00th=[ 102], 5.00th=[ 2198], 10.00th=[ 4329], 20.00th=[ 6544],
00:16:15.160 | 30.00th=[10805], 40.00th=[10939], 50.00th=[11208], 60.00th=[11342],
00:16:15.160 | 70.00th=[11476], 80.00th=[11610], 90.00th=[11745], 95.00th=[11879],
00:16:15.160 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013],
00:16:15.160 | 99.99th=[12013]
00:16:15.160 lat (msec) : 250=1.09%, >=2000=98.91%
00:16:15.160 cpu : usr=0.01%, sys=0.63%, ctx=271, majf=0, minf=23553
00:16:15.160 IO depths : 1=1.1%, 2=2.2%, 4=4.3%, 8=8.7%, 16=17.4%, 32=34.8%, >=64=31.5%
00:16:15.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.160 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:16:15.160 issued rwts: total=92,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.160 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.160 job4: (groupid=0, jobs=1): err= 0: pid=3784751: Thu Nov 7 10:44:42 2024
00:16:15.160 read: IOPS=104, BW=105MiB/s (110MB/s)(1257MiB/12021msec)
00:16:15.160 slat (usec): min=43, max=2055.7k, avg=9483.06, stdev=118965.54
00:16:15.160 clat (msec): min=98, max=7851, avg=696.54, stdev=1384.69
00:16:15.160 lat (msec): min=155, max=7855, avg=706.02, stdev=1399.82
00:16:15.160 clat percentiles (msec):
00:16:15.160 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 199], 20.00th=[ 222],
00:16:15.160 | 30.00th=[ 224], 40.00th=[ 224], 50.00th=[ 224], 60.00th=[ 226],
00:16:15.160 | 70.00th=[ 226], 80.00th=[ 230], 90.00th=[ 2333], 95.00th=[ 2433],
00:16:15.160 | 99.00th=[ 7819], 99.50th=[ 7819], 99.90th=[ 7819], 99.95th=[ 7886],
00:16:15.160 | 99.99th=[ 7886]
00:16:15.160 bw ( KiB/s): min=180224, max=632832, per=16.20%, avg=457054.00, stdev=198933.57, samples=5
00:16:15.160 iops : min= 176, max= 618, avg=446.20, stdev=194.40, samples=5
00:16:15.160 lat (msec) : 100=0.08%, 250=83.77%, 500=1.27%, >=2000=14.88%
00:16:15.160 cpu : usr=0.02%, sys=1.44%, ctx=1181, majf=0, minf=32769
00:16:15.160 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0%
00:16:15.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.160 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:15.160 issued rwts: total=1257,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.160 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.160 job4: (groupid=0, jobs=1): err= 0: pid=3784752: Thu Nov 7 10:44:42 2024
00:16:15.160 read: IOPS=44, BW=44.1MiB/s (46.3MB/s)(526MiB/11919msec)
00:16:15.160 slat (usec): min=43, max=2112.7k, avg=22475.63, stdev=184162.72
00:16:15.160 clat (msec): min=95, max=8613, avg=1820.21, stdev=1914.54
00:16:15.160 lat (msec): min=344, max=9497, avg=1842.69, stdev=1941.47
00:16:15.160 clat percentiles (msec):
00:16:15.160 | 1.00th=[ 347], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 372],
00:16:15.160 | 30.00th=[ 401], 40.00th=[ 435], 50.00th=[ 567], 60.00th=[ 919],
00:16:15.160 | 70.00th=[ 2500], 80.00th=[ 4396], 90.00th=[ 4597], 95.00th=[ 4665],
00:16:15.160 | 99.00th=[ 6275], 99.50th=[ 6342], 99.90th=[ 8658], 99.95th=[ 8658],
00:16:15.160 | 99.99th=[ 8658]
00:16:15.160 bw ( KiB/s): min=24576, max=372736, per=5.75%, avg=162256.80, stdev=163291.79, samples=5
00:16:15.161 iops : min= 24, max= 364, avg=158.40, stdev=159.52, samples=5
00:16:15.161 lat (msec) : 100=0.19%, 500=46.96%, 750=8.37%, 1000=6.84%, 2000=2.47%
00:16:15.161 lat (msec) : >=2000=35.17%
00:16:15.161 cpu : usr=0.02%, sys=0.76%, ctx=655, majf=0, minf=32769
00:16:15.161 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.0%
00:16:15.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.161 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:16:15.161 issued rwts: total=526,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.161 latency : target=0, window=0, percentile=100.00%, depth=128
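Each percentile table buckets completion latency (clat) in msec, which makes it easy to scan a saved log for jobs with bad tails. A minimal sketch is below; build.log is again a placeholder path, and the bracketed "[ 7684]"-style formatting is assumed to match the tables above.

# Pull the 99.00th completion-latency bucket (msec) of every job, worst last.
sed -n 's/.*99\.00th=\[ *\([0-9]*\)\].*/\1/p' build.log | sort -n | tail -5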
00:16:15.161 job4: (groupid=0, jobs=1): err= 0: pid=3784753: Thu Nov 7 10:44:42 2024
00:16:15.161 read: IOPS=22, BW=22.9MiB/s (24.0MB/s)(296MiB/12948msec)
00:16:15.161 slat (usec): min=65, max=2109.4k, avg=36473.95, stdev=238351.96
00:16:15.161 clat (msec): min=948, max=11541, avg=5322.61, stdev=4781.25
00:16:15.161 lat (msec): min=953, max=11546, avg=5359.08, stdev=4787.27
00:16:15.161 clat percentiles (msec):
00:16:15.161 | 1.00th=[ 953], 5.00th=[ 978], 10.00th=[ 995], 20.00th=[ 1020],
00:16:15.161 | 30.00th=[ 1053], 40.00th=[ 1070], 50.00th=[ 1116], 60.00th=[ 7349],
00:16:15.161 | 70.00th=[10805], 80.00th=[11073], 90.00th=[11342], 95.00th=[11476],
00:16:15.161 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476],
00:16:15.161 | 99.99th=[11476]
00:16:15.161 bw ( KiB/s): min= 2048, max=145408, per=1.75%, avg=49444.57, stdev=57227.20, samples=7
00:16:15.161 iops : min= 2, max= 142, avg=48.29, stdev=55.89, samples=7
00:16:15.161 lat (msec) : 1000=11.49%, 2000=41.22%, >=2000=47.30%
00:16:15.161 cpu : usr=0.04%, sys=1.27%, ctx=394, majf=0, minf=32769
00:16:15.161 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.4%, 32=10.8%, >=64=78.7%
00:16:15.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.161 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:16:15.161 issued rwts: total=296,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.161 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.161 job4: (groupid=0, jobs=1): err= 0: pid=3784754: Thu Nov 7 10:44:42 2024
00:16:15.161 read: IOPS=8, BW=8206KiB/s (8403kB/s)(95.0MiB/11855msec)
00:16:15.161 slat (usec): min=424, max=4296.4k, avg=105504.82, stdev=526375.00
00:16:15.161 clat (msec): min=1831, max=11836, avg=8516.41, stdev=3953.85
00:16:15.161 lat (msec): min=1920, max=11854, avg=8621.91, stdev=3907.02
00:16:15.161 clat percentiles (msec):
00:16:15.161 | 1.00th=[ 1838], 5.00th=[ 1921], 10.00th=[ 1938], 20.00th=[ 2123],
00:16:15.161 | 30.00th=[ 4279], 40.00th=[10805], 50.00th=[10939], 60.00th=[11073],
00:16:15.161 | 70.00th=[11208], 80.00th=[11342], 90.00th=[11610], 95.00th=[11745],
00:16:15.161 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879],
00:16:15.161 | 99.99th=[11879]
00:16:15.161 lat (msec) : 2000=10.53%, >=2000=89.47%
00:16:15.161 cpu : usr=0.02%, sys=0.51%, ctx=286, majf=0, minf=24321
00:16:15.161 IO depths : 1=1.1%, 2=2.1%, 4=4.2%, 8=8.4%, 16=16.8%, 32=33.7%, >=64=33.7%
00:16:15.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.161 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:16:15.161 issued rwts: total=95,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.161 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.161 job4: (groupid=0, jobs=1): err= 0: pid=3784755: Thu Nov 7 10:44:42 2024
00:16:15.161 read: IOPS=57, BW=57.5MiB/s (60.3MB/s)(683MiB/11869msec)
00:16:15.161 slat (usec): min=48, max=2039.8k, avg=17233.69, stdev=130858.27
00:16:15.161 clat (msec): min=94, max=3382, avg=2095.33, stdev=1048.74
00:16:15.161 lat (msec): min=712, max=3384, avg=2112.56, stdev=1045.35
00:16:15.161 clat percentiles (msec):
00:16:15.161 | 1.00th=[ 718], 5.00th=[ 735], 10.00th=[ 751], 20.00th=[ 793],
00:16:15.161 | 30.00th=[ 1062], 40.00th=[ 1234], 50.00th=[ 2836], 60.00th=[ 2937],
00:16:15.161 | 70.00th=[ 3004], 80.00th=[ 3071], 90.00th=[ 3138], 95.00th=[ 3239],
00:16:15.161 | 99.00th=[ 3339], 99.50th=[ 3373], 99.90th=[ 3373], 99.95th=[ 3373],
00:16:15.161 | 99.99th=[ 3373]
00:16:15.161 bw ( KiB/s): min= 2048, max=172032, per=3.10%, avg=87395.69, stdev=61007.82, samples=13
00:16:15.161 iops : min= 2, max= 168, avg=85.31, stdev=59.63, samples=13
00:16:15.161 lat (msec) : 100=0.15%, 750=10.40%, 1000=13.47%, 2000=20.20%, >=2000=55.78%
00:16:15.161 cpu : usr=0.03%, sys=1.51%, ctx=861, majf=0, minf=32769
00:16:15.161 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8%
00:16:15.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.161 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:16:15.161 issued rwts: total=683,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.161 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.161 job4: (groupid=0, jobs=1): err= 0: pid=3784756: Thu Nov 7 10:44:42 2024
00:16:15.161 read: IOPS=2, BW=2941KiB/s (3012kB/s)(37.0MiB/12881msec)
00:16:15.161 slat (usec): min=1582, max=2091.0k, avg=290028.48, stdev=710490.80
00:16:15.161 clat (msec): min=2149, max=12878, avg=7655.44, stdev=3172.76
00:16:15.161 lat (msec): min=4184, max=12880, avg=7945.46, stdev=3145.84
00:16:15.161 clat percentiles (msec):
00:16:15.161 | 1.00th=[ 2165], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4245],
00:16:15.161 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 6409], 60.00th=[ 8490],
00:16:15.161 | 70.00th=[10671], 80.00th=[10671], 90.00th=[12818], 95.00th=[12818],
00:16:15.161 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:16:15.161 | 99.99th=[12818]
00:16:15.161 lat (msec) : >=2000=100.00%
00:16:15.161 cpu : usr=0.00%, sys=0.29%, ctx=54, majf=0, minf=9473
00:16:15.161 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0%
00:16:15.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.161 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:16:15.161 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.161 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.161 job4: (groupid=0, jobs=1): err= 0: pid=3784757: Thu Nov 7 10:44:42 2024
00:16:15.161 read: IOPS=5, BW=6075KiB/s (6221kB/s)(71.0MiB/11967msec)
00:16:15.161 slat (usec): min=787, max=2066.4k, avg=167661.72, stdev=528238.36
00:16:15.161 clat (msec): min=62, max=11951, avg=9493.42, stdev=2719.88
00:16:15.161 lat (msec): min=2094, max=11966, avg=9661.08, stdev=2487.15
00:16:15.161 clat percentiles (msec):
00:16:15.161 | 1.00th=[ 63], 5.00th=[ 2165], 10.00th=[ 4329], 20.00th=[ 8557],
00:16:15.161 | 30.00th=[10537], 40.00th=[10537], 50.00th=[10537], 60.00th=[10537],
00:16:15.161 | 70.00th=[10671], 80.00th=[10671], 90.00th=[11879], 95.00th=[11879],
00:16:15.161 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013],
00:16:15.161 | 99.99th=[12013]
00:16:15.161 lat (msec) : 100=1.41%, >=2000=98.59%
00:16:15.161 cpu : usr=0.03%, sys=0.33%, ctx=167, majf=0, minf=18177
00:16:15.161 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.3%, 16=22.5%, 32=45.1%, >=64=11.3%
00:16:15.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.161 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:16:15.161 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.161 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.161 job4: (groupid=0, jobs=1): err= 0: pid=3784758: Thu Nov 7 10:44:42 2024
00:16:15.161 read: IOPS=21, BW=21.3MiB/s (22.3MB/s)(252MiB/11853msec)
00:16:15.161 slat (usec): min=55, max=2054.5k, avg=39720.82, stdev=253690.82
00:16:15.161 clat (msec): min=734, max=11086, avg=5782.00, stdev=4176.32
00:16:15.161 lat (msec): min=736, max=11089, avg=5821.72, stdev=4180.25
00:16:15.161 clat percentiles (msec):
00:16:15.161 | 1.00th=[ 735], 5.00th=[ 735], 10.00th=[ 751], 20.00th=[ 768],
00:16:15.161 | 30.00th=[ 1938], 40.00th=[ 4144], 50.00th=[ 6275], 60.00th=[ 7080],
00:16:15.161 | 70.00th=[ 9194], 80.00th=[10805], 90.00th=[10939], 95.00th=[11073],
00:16:15.161 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073],
00:16:15.161 | 99.99th=[11073]
00:16:15.161 bw ( KiB/s): min=10240, max=88064, per=1.28%, avg=36002.14, stdev=28266.91, samples=7
00:16:15.161 iops : min= 10, max= 86, avg=35.14, stdev=27.61, samples=7
00:16:15.161 lat (msec) : 750=9.92%, 1000=19.05%, 2000=3.17%, >=2000=67.86%
00:16:15.161 cpu : usr=0.01%, sys=1.20%, ctx=265, majf=0, minf=32769
00:16:15.161 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.3%, 32=12.7%, >=64=75.0%
00:16:15.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.161 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8%
00:16:15.161 issued rwts: total=252,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.161 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.161 job4: (groupid=0, jobs=1): err= 0: pid=3784759: Thu Nov 7 10:44:42 2024
00:16:15.161 read: IOPS=7, BW=7212KiB/s (7386kB/s)(85.0MiB/12068msec)
00:16:15.161 slat (usec): min=962, max=2059.3k, avg=140779.82, stdev=493100.89
00:16:15.161 clat (msec): min=100, max=12064, avg=9090.65, stdev=3443.01
00:16:15.161 lat (msec): min=2129, max=12067, avg=9231.43, stdev=3313.26
00:16:15.161 clat percentiles (msec):
00:16:15.161 | 1.00th=[ 102], 5.00th=[ 2198], 10.00th=[ 4279], 20.00th=[ 6409],
00:16:15.161 | 30.00th=[ 6544], 40.00th=[ 8658], 50.00th=[10805], 60.00th=[11879],
00:16:15.161 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013],
00:16:15.161 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013],
00:16:15.161 | 99.99th=[12013]
00:16:15.161 lat (msec) : 250=1.18%, >=2000=98.82%
00:16:15.161 cpu : usr=0.00%, sys=0.74%, ctx=102, majf=0, minf=21761
00:16:15.161 IO depths : 1=1.2%, 2=2.4%, 4=4.7%, 8=9.4%, 16=18.8%, 32=37.6%, >=64=25.9%
00:16:15.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.161 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:16:15.161 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.161 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.161 job5: (groupid=0, jobs=1): err= 0: pid=3784773: Thu Nov 7 10:44:42 2024
00:16:15.161 read: IOPS=75, BW=75.6MiB/s (79.3MB/s)(757MiB/10015msec)
00:16:15.161 slat (usec): min=39, max=2061.5k, avg=13209.33, stdev=135108.42
00:16:15.161 clat (msec): min=13, max=7998, avg=532.07, stdev=989.08
00:16:15.161 lat (msec): min=14, max=8003, avg=545.28, stdev=1025.71
00:16:15.161 clat percentiles (msec):
00:16:15.161 | 1.00th=[ 20], 5.00th=[ 46], 10.00th=[ 86], 20.00th=[ 342],
00:16:15.161 | 30.00th=[ 355], 40.00th=[ 368], 50.00th=[ 393], 60.00th=[ 451],
00:16:15.161 | 70.00th=[ 493], 80.00th=[ 514], 90.00th=[ 527], 95.00th=[ 535],
00:16:15.161 | 99.00th=[ 6879], 99.50th=[ 8020], 99.90th=[ 8020], 99.95th=[ 8020],
00:16:15.161 | 99.99th=[ 8020]
00:16:15.161 bw ( KiB/s): min=10240, max=372736, per=8.47%, avg=239104.00, stdev=160411.05, samples=4
00:16:15.162 iops : min= 10, max= 364, avg=233.50, stdev=156.65, samples=4
00:16:15.162 lat (msec) : 20=1.06%, 50=4.49%, 100=6.21%, 250=4.36%, 500=56.14%
00:16:15.162 lat (msec) : 750=24.97%, >=2000=2.77%
00:16:15.162 cpu : usr=0.03%, sys=1.09%, ctx=1365, majf=0, minf=32769
00:16:15.162 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.7%
00:16:15.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.162 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:16:15.162 issued rwts: total=757,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.162 latency : target=0, window=0, percentile=100.00%, depth=128
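The lat (msec) rows are a histogram of completion latencies, so each job's buckets should sum to roughly 100%. A worked check for the pid=3784773 job above:

# Sanity check: the seven lat (msec) buckets of pid=3784773, copied from above.
echo '1.06 4.49 6.21 4.36 56.14 24.97 2.77' |
awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; print s "%" }'    # prints 100%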
00:16:15.162 job5: (groupid=0, jobs=1): err= 0: pid=3784774: Thu Nov 7 10:44:42 2024
00:16:15.162 read: IOPS=184, BW=185MiB/s (194MB/s)(1851MiB/10011msec)
00:16:15.162 slat (usec): min=50, max=2031.4k, avg=5399.12, stdev=63321.29
00:16:15.162 clat (msec): min=9, max=4779, avg=661.69, stdev=1028.64
00:16:15.162 lat (msec): min=10, max=4782, avg=667.09, stdev=1035.70
00:16:15.162 clat percentiles (msec):
00:16:15.162 | 1.00th=[ 25], 5.00th=[ 88], 10.00th=[ 194], 20.00th=[ 209],
00:16:15.162 | 30.00th=[ 257], 40.00th=[ 313], 50.00th=[ 334], 60.00th=[ 456],
00:16:15.162 | 70.00th=[ 493], 80.00th=[ 558], 90.00th=[ 776], 95.00th=[ 2937],
00:16:15.162 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4799], 99.95th=[ 4799],
00:16:15.162 | 99.99th=[ 4799]
00:16:15.162 bw ( KiB/s): min=16384, max=606208, per=8.07%, avg=227800.62, stdev=176570.29, samples=13
00:16:15.162 iops : min= 16, max= 592, avg=222.46, stdev=172.43, samples=13
00:16:15.162 lat (msec) : 10=0.05%, 20=0.65%, 50=1.94%, 100=3.19%, 250=23.45%
00:16:15.162 lat (msec) : 500=42.73%, 750=16.69%, 1000=1.94%, >=2000=9.35%
00:16:15.162 cpu : usr=0.06%, sys=2.37%, ctx=4126, majf=0, minf=32769
00:16:15.162 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6%
00:16:15.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.162 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:15.162 issued rwts: total=1851,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.162 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.162 job5: (groupid=0, jobs=1): err= 0: pid=3784775: Thu Nov 7 10:44:42 2024
00:16:15.162 read: IOPS=55, BW=55.5MiB/s (58.2MB/s)(557MiB/10036msec)
00:16:15.162 slat (usec): min=89, max=2123.9k, avg=17954.95, stdev=152644.40
00:16:15.162 clat (msec): min=31, max=6808, avg=1483.08, stdev=1353.38
00:16:15.162 lat (msec): min=38, max=6821, avg=1501.04, stdev=1373.77
00:16:15.162 clat percentiles (msec):
00:16:15.162 | 1.00th=[ 68], 5.00th=[ 215], 10.00th=[ 351], 20.00th=[ 355],
00:16:15.162 | 30.00th=[ 401], 40.00th=[ 514], 50.00th=[ 1011], 60.00th=[ 1368],
00:16:15.162 | 70.00th=[ 1586], 80.00th=[ 2970], 90.00th=[ 3440], 95.00th=[ 3775],
00:16:15.162 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 6812], 99.95th=[ 6812],
00:16:15.162 | 99.99th=[ 6812]
00:16:15.162 bw ( KiB/s): min= 4096, max=249856, per=3.80%, avg=107373.71, stdev=78866.68, samples=7
00:16:15.162 iops : min= 4, max= 244, avg=104.86, stdev=77.02, samples=7
00:16:15.162 lat (msec) : 50=0.72%, 100=1.08%, 250=3.77%, 500=32.50%, 750=8.44%
00:16:15.162 lat (msec) : 1000=3.41%, 2000=21.18%, >=2000=28.90%
00:16:15.162 cpu : usr=0.03%, sys=1.22%, ctx=1307, majf=0, minf=32769
00:16:15.162 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.7%
00:16:15.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.162 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:16:15.162 issued rwts: total=557,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.162 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.162 job5: (groupid=0, jobs=1): err= 0: pid=3784776: Thu Nov 7 10:44:42 2024
00:16:15.162 read: IOPS=142, BW=143MiB/s (150MB/s)(1722MiB/12060msec)
00:16:15.162 slat (usec): min=46, max=2068.3k, avg=6966.29, stdev=89735.19
00:16:15.162 clat (msec): min=58, max=5687, avg=639.37, stdev=1100.08
00:16:15.162 lat (msec): min=122, max=5689, avg=646.33, stdev=1108.85
00:16:15.162 clat percentiles (msec):
00:16:15.162 | 1.00th=[ 123], 5.00th=[ 124], 10.00th=[ 124], 20.00th=[ 125],
00:16:15.162 | 30.00th=[ 125], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 213],
00:16:15.162 | 70.00th=[ 347], 80.00th=[ 860], 90.00th=[ 2265], 95.00th=[ 2937],
00:16:15.162 | 99.00th=[ 5671], 99.50th=[ 5671], 99.90th=[ 5671], 99.95th=[ 5671],
00:16:15.162 | 99.99th=[ 5671]
00:16:15.162 bw ( KiB/s): min=75776, max=1046528, per=12.85%, avg=362723.56, stdev=350494.49, samples=9
00:16:15.162 iops : min= 74, max= 1022, avg=354.22, stdev=342.28, samples=9
00:16:15.162 lat (msec) : 100=0.06%, 250=64.00%, 500=14.00%, 750=1.28%, 1000=1.63%
00:16:15.162 lat (msec) : 2000=8.13%, >=2000=10.92%
00:16:15.162 cpu : usr=0.03%, sys=1.82%, ctx=1809, majf=0, minf=32769
00:16:15.162 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3%
00:16:15.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.162 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:15.162 issued rwts: total=1722,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.162 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.162 job5: (groupid=0, jobs=1): err= 0: pid=3784777: Thu Nov 7 10:44:42 2024
00:16:15.162 read: IOPS=86, BW=86.1MiB/s (90.3MB/s)(867MiB/10064msec)
00:16:15.162 slat (usec): min=59, max=2063.8k, avg=11531.52, stdev=106936.72
00:16:15.162 clat (msec): min=62, max=4743, avg=1437.25, stdev=1448.37
00:16:15.162 lat (msec): min=64, max=4746, avg=1448.79, stdev=1452.05
00:16:15.162 clat percentiles (msec):
00:16:15.162 | 1.00th=[ 86], 5.00th=[ 359], 10.00th=[ 363], 20.00th=[ 418],
00:16:15.162 | 30.00th=[ 451], 40.00th=[ 542], 50.00th=[ 575], 60.00th=[ 667],
00:16:15.162 | 70.00th=[ 2123], 80.00th=[ 2735], 90.00th=[ 4597], 95.00th=[ 4665],
00:16:15.162 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4732], 99.95th=[ 4732],
00:16:15.162 | 99.99th=[ 4732]
00:16:15.162 bw ( KiB/s): min=14336, max=348160, per=4.88%, avg=137774.55, stdev=113408.51, samples=11
00:16:15.162 iops : min= 14, max= 340, avg=134.55, stdev=110.75, samples=11
00:16:15.162 lat (msec) : 100=2.77%, 250=1.04%, 500=31.49%, 750=25.84%, 1000=1.04%
00:16:15.162 lat (msec) : 2000=5.54%, >=2000=32.30%
00:16:15.162 cpu : usr=0.06%, sys=1.83%, ctx=1938, majf=0, minf=32769
00:16:15.162 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.7%, >=64=92.7%
00:16:15.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.162 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:15.162 issued rwts: total=867,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.162 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.162 job5: (groupid=0, jobs=1): err= 0: pid=3784778: Thu Nov 7 10:44:42 2024
00:16:15.162 read: IOPS=177, BW=178MiB/s (186MB/s)(1780MiB/10015msec)
00:16:15.162 slat (usec): min=59, max=966160, avg=5614.59, stdev=24224.72
00:16:15.162 clat (msec): min=13, max=2850, avg=652.83, stdev=460.17
00:16:15.162 lat (msec): min=16, max=2870, avg=658.45, stdev=462.87
00:16:15.162 clat percentiles (msec):
00:16:15.162 | 1.00th=[ 63], 5.00th=[ 220], 10.00th=[ 300], 20.00th=[ 317],
00:16:15.162 | 30.00th=[ 334], 40.00th=[ 388], 50.00th=[ 485], 60.00th=[ 567],
00:16:15.162 | 70.00th=[ 709], 80.00th=[ 978], 90.00th=[ 1603], 95.00th=[ 1720],
00:16:15.162 | 99.00th=[ 1787], 99.50th=[ 1804], 99.90th=[ 2802], 99.95th=[ 2836],
00:16:15.162 | 99.99th=[ 2836]
00:16:15.162 bw ( KiB/s): min= 2048, max=395264, per=7.04%, avg=198792.53, stdev=121028.87, samples=15
00:16:15.162 iops : min= 2, max= 386, avg=194.13, stdev=118.19, samples=15
00:16:15.162 lat (msec) : 20=0.17%, 50=0.56%, 100=0.56%, 250=4.72%, 500=46.07%
00:16:15.162 lat (msec) : 750=19.89%, 1000=11.40%, 2000=16.46%, >=2000=0.17%
00:16:15.162 cpu : usr=0.02%, sys=2.06%, ctx=4430, majf=0, minf=32769
00:16:15.162 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5%
00:16:15.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.162 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:15.162 issued rwts: total=1780,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.162 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.162 job5: (groupid=0, jobs=1): err= 0: pid=3784779: Thu Nov 7 10:44:42 2024
00:16:15.162 read: IOPS=117, BW=117MiB/s (123MB/s)(1659MiB/14155msec)
00:16:15.162 slat (usec): min=48, max=2086.6k, avg=7253.98, stdev=104215.16
00:16:15.162 clat (msec): min=116, max=6658, avg=894.39, stdev=1916.55
00:16:15.162 lat (msec): min=117, max=6663, avg=901.64, stdev=1924.84
00:16:15.162 clat percentiles (msec):
00:16:15.162 | 1.00th=[ 118], 5.00th=[ 120], 10.00th=[ 120], 20.00th=[ 120],
00:16:15.162 | 30.00th=[ 121], 40.00th=[ 121], 50.00th=[ 131], 60.00th=[ 230],
00:16:15.162 | 70.00th=[ 234], 80.00th=[ 245], 90.00th=[ 5604], 95.00th=[ 6544],
00:16:15.162 | 99.00th=[ 6611], 99.50th=[ 6678], 99.90th=[ 6678], 99.95th=[ 6678],
00:16:15.162 | 99.99th=[ 6678]
00:16:15.162 bw ( KiB/s): min= 2052, max=1083392, per=15.88%, avg=448220.00, stdev=391355.60, samples=7
00:16:15.162 iops : min= 2, max= 1058, avg=437.71, stdev=382.18, samples=7
00:16:15.162 lat (msec) : 250=82.52%, 500=3.92%, >=2000=13.56%
00:16:15.162 cpu : usr=0.04%, sys=1.52%, ctx=1574, majf=0, minf=32331
00:16:15.162 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2%
00:16:15.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.162 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:15.162 issued rwts: total=1659,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.162 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.162 job5: (groupid=0, jobs=1): err= 0: pid=3784780: Thu Nov 7 10:44:42 2024
00:16:15.162 read: IOPS=68, BW=68.9MiB/s (72.2MB/s)(691MiB/10031msec)
00:16:15.162 slat (usec): min=45, max=2102.5k, avg=14469.99, stdev=122931.73
00:16:15.162 clat (msec): min=29, max=3362, avg=1608.86, stdev=1093.18
00:16:15.162 lat (msec): min=31, max=5347, avg=1623.33, stdev=1099.37
00:16:15.162 clat percentiles (msec):
00:16:15.162 | 1.00th=[ 64], 5.00th=[ 443], 10.00th=[ 464], 20.00th=[ 514],
00:16:15.162 | 30.00th=[ 676], 40.00th=[ 919], 50.00th=[ 953], 60.00th=[ 2333],
00:16:15.162 | 70.00th=[ 2567], 80.00th=[ 2668], 90.00th=[ 3205], 95.00th=[ 3272],
00:16:15.162 | 99.00th=[ 3339], 99.50th=[ 3339], 99.90th=[ 3373], 99.95th=[ 3373],
00:16:15.162 | 99.99th=[ 3373]
00:16:15.162 bw ( KiB/s): min= 4096, max=302499, per=4.55%, avg=128274.11, stdev=127218.68, samples=9
00:16:15.162 iops : min= 4, max= 295, avg=125.22, stdev=124.17, samples=9
00:16:15.162 lat (msec) : 50=0.58%, 100=0.72%, 250=0.43%, 500=13.17%, 750=17.37%
00:16:15.162 lat (msec) : 1000=20.98%, 2000=3.76%, >=2000=42.98%
00:16:15.162 cpu : usr=0.03%, sys=1.56%, ctx=971, majf=0, minf=32769
00:16:15.162 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9%
00:16:15.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.162 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:16:15.162 issued rwts: total=691,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.163 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.163 job5: (groupid=0, jobs=1): err= 0: pid=3784781: Thu Nov 7 10:44:42 2024
00:16:15.163 read: IOPS=67, BW=67.6MiB/s (70.9MB/s)(678MiB/10030msec)
00:16:15.163 slat (usec): min=53, max=2083.6k, avg=14747.89, stdev=133549.32
00:16:15.163 clat (msec): min=28, max=4593, avg=1470.17, stdev=1559.64
00:16:15.163 lat (msec): min=31, max=4595, avg=1484.91, stdev=1564.16
00:16:15.163 clat percentiles (msec):
00:16:15.163 | 1.00th=[ 62], 5.00th=[ 342], 10.00th=[ 347], 20.00th=[ 359],
00:16:15.163 | 30.00th=[ 384], 40.00th=[ 397], 50.00th=[ 726], 60.00th=[ 877],
00:16:15.163 | 70.00th=[ 961], 80.00th=[ 3742], 90.00th=[ 4178], 95.00th=[ 4396],
00:16:15.163 | 99.00th=[ 4530], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597],
00:16:15.163 | 99.99th=[ 4597]
00:16:15.163 bw ( KiB/s): min=14336, max=374784, per=5.00%, avg=141005.88, stdev=117550.57, samples=8
00:16:15.163 iops : min= 14, max= 366, avg=137.62, stdev=114.75, samples=8
00:16:15.163 lat (msec) : 50=0.74%, 100=0.59%, 250=0.74%, 500=40.86%, 750=9.00%
00:16:15.163 lat (msec) : 1000=20.65%, >=2000=27.43%
00:16:15.163 cpu : usr=0.00%, sys=1.28%, ctx=1385, majf=0, minf=32769
00:16:15.163 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7%
00:16:15.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.163 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:16:15.163 issued rwts: total=678,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.163 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.163 job5: (groupid=0, jobs=1): err= 0: pid=3784782: Thu Nov 7 10:44:42 2024
00:16:15.163 read: IOPS=71, BW=72.0MiB/s (75.5MB/s)(722MiB/10029msec)
00:16:15.163 slat (usec): min=53, max=2094.0k, avg=13848.83, stdev=138868.38
00:16:15.163 clat (msec): min=27, max=4773, avg=1196.44, stdev=1647.78
00:16:15.163 lat (msec): min=29, max=4778, avg=1210.29, stdev=1654.34
00:16:15.163 clat percentiles (msec):
00:16:15.163 | 1.00th=[ 36], 5.00th=[ 94], 10.00th=[ 326], 20.00th=[ 347],
00:16:15.163 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 397], 60.00th=[ 472],
00:16:15.163 | 70.00th=[ 575], 80.00th=[ 2400], 90.00th=[ 4732], 95.00th=[ 4732],
00:16:15.163 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799],
00:16:15.163 | 99.99th=[ 4799]
00:16:15.163 bw ( KiB/s): min= 2048, max=335872, per=6.17%, avg=174080.00, stdev=131577.69, samples=7
00:16:15.163 iops : min= 2, max= 328, avg=170.00, stdev=128.49, samples=7
00:16:15.163 lat (msec) : 50=1.25%, 100=4.29%, 250=2.22%, 500=55.26%, 750=16.76%
00:16:15.163 lat (msec) : >=2000=20.22%
00:16:15.163 cpu : usr=0.02%, sys=1.07%, ctx=2018, majf=0, minf=32769
00:16:15.163 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3%
00:16:15.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.163 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:16:15.163 issued rwts: total=722,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.163 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.163 job5: (groupid=0, jobs=1): err= 0: pid=3784783: Thu Nov 7 10:44:42 2024
00:16:15.163 read: IOPS=77, BW=77.9MiB/s (81.7MB/s)(781MiB/10021msec)
00:16:15.163 slat (usec): min=59, max=2056.9k, avg=12800.68, stdev=104708.08
00:16:15.163 clat (msec): min=19, max=5058, avg=1491.83, stdev=1535.92
00:16:15.163 lat (msec): min=26, max=5060, avg=1504.63, stdev=1540.59
00:16:15.163 clat percentiles (msec):
00:16:15.163 | 1.00th=[ 53], 5.00th=[ 309], 10.00th=[ 531], 20.00th=[ 701],
00:16:15.163 | 30.00th=[ 726], 40.00th=[ 743], 50.00th=[ 776], 60.00th=[ 802],
00:16:15.163 | 70.00th=[ 1116], 80.00th=[ 1770], 90.00th=[ 4933], 95.00th=[ 4933],
00:16:15.163 | 99.00th=[ 5000], 99.50th=[ 5067], 99.90th=[ 5067], 99.95th=[ 5067],
00:16:15.163 | 99.99th=[ 5067]
00:16:15.163 bw ( KiB/s): min=14336, max=190464, per=4.75%, avg=133939.20, stdev=58980.19, samples=10
00:16:15.163 iops : min= 14, max= 186, avg=130.80, stdev=57.60, samples=10
00:16:15.163 lat (msec) : 20=0.13%, 50=0.77%, 100=0.77%, 250=1.79%, 500=6.02%
00:16:15.163 lat (msec) : 750=36.75%, 1000=22.02%, 2000=14.60%, >=2000=17.16%
00:16:15.163 cpu : usr=0.02%, sys=1.56%, ctx=1278, majf=0, minf=32769
00:16:15.163 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=91.9%
00:16:15.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.163 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:16:15.163 issued rwts: total=781,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.163 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.163 job5: (groupid=0, jobs=1): err= 0: pid=3784784: Thu Nov 7 10:44:42 2024
00:16:15.163 read: IOPS=163, BW=163MiB/s (171MB/s)(1639MiB/10026msec)
00:16:15.163 slat (usec): min=44, max=2072.2k, avg=6100.69, stdev=70790.22
00:16:15.163 clat (msec): min=21, max=5904, avg=732.65, stdev=1283.94
00:16:15.163 lat (msec): min=26, max=5916, avg=738.75, stdev=1291.61
00:16:15.163 clat percentiles (msec):
00:16:15.163 | 1.00th=[ 81], 5.00th=[ 211], 10.00th=[ 213], 20.00th=[ 222],
00:16:15.163 | 30.00th=[ 309], 40.00th=[ 338], 50.00th=[ 363], 60.00th=[ 414],
00:16:15.163 | 70.00th=[ 464], 80.00th=[ 485], 90.00th=[ 550], 95.00th=[ 5000],
00:16:15.163 | 99.00th=[ 5537], 99.50th=[ 5537], 99.90th=[ 5873], 99.95th=[ 5873],
00:16:15.163 | 99.99th=[ 5873]
00:16:15.163 bw ( KiB/s): min= 4096, max=602112, per=9.97%, avg=281506.91, stdev=167371.51, samples=11
00:16:15.163 iops : min= 4, max= 588, avg=274.91, stdev=163.45, samples=11
00:16:15.163 lat (msec) : 50=0.24%, 100=1.22%, 250=21.66%, 500=61.32%, 750=7.02%
00:16:15.163 lat (msec) : 2000=0.55%, >=2000=7.99%
00:16:15.163 cpu : usr=0.03%, sys=2.04%, ctx=3997, majf=0, minf=32769
00:16:15.163 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.2%
00:16:15.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.163 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:15.163 issued rwts: total=1639,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.163 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.163 job5: (groupid=0, jobs=1): err= 0: pid=3784785: Thu Nov 7 10:44:42 2024
00:16:15.163 read: IOPS=85, BW=85.1MiB/s (89.2MB/s)(853MiB/10027msec)
00:16:15.163 slat (usec): min=135, max=2032.8k, avg=11721.07, stdev=91497.70
00:16:15.163 clat (msec): min=23, max=5448, avg=1296.51, stdev=1014.81
00:16:15.163 lat (msec): min=36, max=6801, avg=1308.23, stdev=1026.51
00:16:15.163 clat percentiles (msec):
00:16:15.163 | 1.00th=[ 46], 5.00th=[ 326], 10.00th=[ 477], 20.00th=[ 550],
00:16:15.163 | 30.00th=[ 575], 40.00th=[ 651], 50.00th=[ 768], 60.00th=[ 1011],
00:16:15.163 | 70.00th=[ 1905], 80.00th=[ 2400], 90.00th=[ 2869], 95.00th=[ 3205],
00:16:15.163 | 99.00th=[ 3339], 99.50th=[ 5403], 99.90th=[ 5470], 99.95th=[ 5470],
00:16:15.163 | 99.99th=[ 5470]
00:16:15.163 bw ( KiB/s): min= 8192, max=233472, per=4.39%, avg=123904.00, stdev=80513.90, samples=12
00:16:15.163 iops : min= 8, max= 228, avg=121.00, stdev=78.63, samples=12
00:16:15.163 lat (msec) : 50=1.17%, 100=0.82%, 250=1.29%, 500=7.27%, 750=38.69%
00:16:15.163 lat (msec) : 1000=10.43%, 2000=10.55%, >=2000=29.78%
00:16:15.163 cpu : usr=0.04%, sys=1.77%, ctx=2163, majf=0, minf=32769
00:16:15.163 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.6%
00:16:15.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.163 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:15.163 issued rwts: total=853,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.163 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:15.163
00:16:15.163 Run status group 0 (all jobs):
00:16:15.163 READ: bw=2756MiB/s (2890MB/s), 1102KiB/s-185MiB/s (1128kB/s-194MB/s), io=38.1GiB (40.9GB), run=10011-14155msec
00:16:15.163
00:16:15.163 Disk stats (read/write):
00:16:15.163 nvme0n1: ios=50209/0, merge=0/0, ticks=9940058/0, in_queue=9940058, util=98.19%
00:16:15.163 nvme1n1: ios=41837/0, merge=0/0, ticks=12230862/0, in_queue=12230862, util=98.48%
00:16:15.163 nvme2n1: ios=22833/0, merge=0/0, ticks=11360882/0, in_queue=11360882, util=98.63%
00:16:15.163 nvme3n1: ios=39666/0, merge=0/0, ticks=8381963/0, in_queue=8381963, util=98.88%
00:16:15.163 nvme4n1: ios=40640/0, merge=0/0, ticks=9332406/0, in_queue=9332406, util=99.01%
00:16:15.163 nvme5n1: ios=116369/0, merge=0/0, ticks=8891139/0, in_queue=8891139, util=99.40%
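The READ summary above is internally consistent: the aggregate bandwidth is approximately the total data moved divided by the longest job runtime (fio's exact aggregation window may differ slightly, so this is only a back-of-envelope check):

# 38.1 GiB over the longest runtime (14155 msec) ~ the reported bw=2756MiB/s.
echo 'scale=1; 38.1 * 1024 / 14.155' | bc    # ~2756.2 MiB/s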
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.359 10:44:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:16.359 10:44:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:17.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.736 10:44:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:16:17.736 10:44:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:16:17.736 10:44:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:17.736 10:44:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000001 00:16:17.736 10:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:17.736 10:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000001 00:16:17.736 10:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:16:17.736 10:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:17.736 10:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.736 10:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:17.736 10:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.736 10:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:17.736 10:44:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:16:18.673 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:16:18.673 10:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:16:18.673 10:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:16:18.673 10:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:18.673 10:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000002 00:16:18.673 10:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:18.673 10:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000002 00:16:18.673 10:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:16:18.673 10:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:18.673 10:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.673 10:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:18.673 10:44:46 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.673 10:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:18.673 10:44:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:19.609 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:19.609 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:16:19.609 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:16:19.609 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:19.609 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000003 00:16:19.609 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:19.609 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000003 00:16:19.609 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:16:19.609 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:19.609 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.609 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:19.609 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.609 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:19.609 10:44:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:20.548 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:20.548 10:44:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:16:20.548 10:44:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:16:20.548 10:44:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:20.548 10:44:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000004 00:16:20.548 10:44:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:20.548 10:44:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000004 00:16:20.548 10:44:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:16:20.548 10:44:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:20.548 10:44:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.548 10:44:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 
00:16:20.548 10:44:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.548 10:44:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:20.548 10:44:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:21.485 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # local i=0 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1222 -- # grep -q -w SPDK00000000000005 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # grep -q -w SPDK00000000000005 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # return 0 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:21.485 rmmod nvme_rdma 00:16:21.485 rmmod nvme_fabrics 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 3783155 ']' 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 3783155 
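The seq 0 5 loop above runs the same three-step teardown for every subsystem: disconnect the kernel-initiator controller, poll until the block device with the matching serial disappears (waitforserial_disconnect), then delete the subsystem over RPC. A minimal stand-alone sketch of that pattern, assuming nvme-cli is installed and an SPDK target is reachable through scripts/rpc.py (the rpc.py path and the poll interval are assumptions; the NQN and serial conventions are the ones visible in the log):

#!/usr/bin/env bash
RPC=./scripts/rpc.py   # assumed location of the SPDK RPC client

for i in $(seq 0 5); do
    # Drop the kernel-initiator controller for this subsystem.
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"

    # waitforserial_disconnect: poll until no block device with the
    # matching serial is still visible to lsblk.
    serial=$(printf 'SPDK%014d' "$i")
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        sleep 1
    done

    # Remove the subsystem on the target side.
    "$RPC" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done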
00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # '[' -z 3783155 ']' 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # kill -0 3783155 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@957 -- # uname 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:21.485 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3783155 00:16:21.744 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:21.744 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:21.744 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3783155' 00:16:21.744 killing process with pid 3783155 00:16:21.744 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@971 -- # kill 3783155 00:16:21.744 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@976 -- # wait 3783155 00:16:22.004 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:22.004 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:22.004 00:16:22.004 real 0m35.822s 00:16:22.004 user 2m4.810s 00:16:22.004 sys 0m16.729s 00:16:22.004 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:22.004 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:22.004 ************************************ 00:16:22.004 END TEST nvmf_srq_overwhelm 00:16:22.004 ************************************ 00:16:22.004 10:44:49 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:16:22.004 10:44:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:22.004 10:44:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:22.004 10:44:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:22.004 ************************************ 00:16:22.004 START TEST nvmf_shutdown 00:16:22.004 ************************************ 00:16:22.004 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:16:22.264 * Looking for test storage... 
00:16:22.264 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:22.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.264 --rc genhtml_branch_coverage=1 00:16:22.264 --rc genhtml_function_coverage=1 00:16:22.264 --rc genhtml_legend=1 00:16:22.264 --rc geninfo_all_blocks=1 00:16:22.264 --rc geninfo_unexecuted_blocks=1 00:16:22.264 00:16:22.264 ' 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:22.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.264 --rc genhtml_branch_coverage=1 00:16:22.264 --rc genhtml_function_coverage=1 00:16:22.264 --rc genhtml_legend=1 00:16:22.264 --rc geninfo_all_blocks=1 00:16:22.264 --rc geninfo_unexecuted_blocks=1 00:16:22.264 00:16:22.264 ' 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:22.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.264 --rc genhtml_branch_coverage=1 00:16:22.264 --rc genhtml_function_coverage=1 00:16:22.264 --rc genhtml_legend=1 00:16:22.264 --rc geninfo_all_blocks=1 00:16:22.264 --rc geninfo_unexecuted_blocks=1 00:16:22.264 00:16:22.264 ' 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:22.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.264 --rc genhtml_branch_coverage=1 00:16:22.264 --rc genhtml_function_coverage=1 00:16:22.264 --rc genhtml_legend=1 00:16:22.264 --rc geninfo_all_blocks=1 00:16:22.264 --rc geninfo_unexecuted_blocks=1 00:16:22.264 00:16:22.264 ' 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.264 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:22.265 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:22.265 10:44:49 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:16:22.265 ************************************ 00:16:22.265 START TEST nvmf_shutdown_tc1 00:16:22.265 ************************************ 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:16:22.265 10:44:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:28.835 10:44:56 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # 
pci_devs=("${mlx[@]}") 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:28.835 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:28.835 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:d9:00.0: mlx_0_0' 00:16:28.835 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:28.835 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:28.836 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 
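The load_ib_rdma_modules step above pulls in the kernel RDMA stack before any addresses are assigned. The same step in isolation (module names are taken from the log; whether each one loads depends on the running kernel, and modprobe typically needs root):

# Load the InfiniBand/RDMA core and connection-manager modules.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done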
00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:28.836 10:44:56 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:28.836 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:28.836 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:28.836 altname enp217s0f0np0 00:16:28.836 altname ens818f0np0 00:16:28.836 inet 192.168.100.8/24 scope global mlx_0_0 00:16:28.836 valid_lft forever preferred_lft forever 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:28.836 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:28.836 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:28.836 altname enp217s0f1np1 00:16:28.836 altname ens818f1np1 00:16:28.836 inet 192.168.100.9/24 scope global mlx_0_1 00:16:28.836 valid_lft forever preferred_lft forever 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:28.836 
10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:28.836 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:28.837 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:28.837 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:28.837 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:28.837 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:28.837 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:28.837 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:28.837 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:28.837 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:28.837 192.168.100.9' 00:16:28.837 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:28.837 192.168.100.9' 00:16:28.837 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:16:28.837 10:44:56 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:28.837 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:28.837 192.168.100.9' 00:16:28.837 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:16:28.837 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:16:28.837 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:28.837 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:29.097 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:29.097 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:29.097 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:29.097 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:29.097 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:16:29.097 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:29.097 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:29.097 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:29.097 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3791279 00:16:29.097 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3791279 00:16:29.097 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:29.097 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3791279 ']' 00:16:29.097 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.097 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:29.097 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.097 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:29.097 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:29.097 [2024-11-07 10:44:56.594456] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
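The allocate_nic_ips path above derives each interface's IPv4 address with the same three-stage pipeline every time it runs. Restated as a stand-alone helper (interface names mirror the log; the addresses printed depend on the local netdev configuration):

get_ip_address() {
    local interface=$1
    # e.g. "6: mlx_0_0    inet 192.168.100.8/24 scope global ..." -> "192.168.100.8"
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)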
00:16:29.097 [2024-11-07 10:44:56.594519] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.097 [2024-11-07 10:44:56.673438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:29.097 [2024-11-07 10:44:56.712591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.097 [2024-11-07 10:44:56.712649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.097 [2024-11-07 10:44:56.712658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.097 [2024-11-07 10:44:56.712667] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.097 [2024-11-07 10:44:56.712678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.097 [2024-11-07 10:44:56.714370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.097 [2024-11-07 10:44:56.714434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:29.097 [2024-11-07 10:44:56.714563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.097 [2024-11-07 10:44:56.714564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:29.356 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:29.356 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:16:29.356 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:29.356 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:29.356 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:29.356 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.356 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:29.356 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.356 10:44:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:29.356 [2024-11-07 10:44:56.897453] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12390f0/0x123d5e0) succeed. 00:16:29.356 [2024-11-07 10:44:56.906716] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x123a780/0x127ec80) succeed. 
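With the target process up, shutdown.sh@21 creates the RDMA transport through a single RPC before the IB devices are registered. The direct form of that call (the rpc.py path is an assumption; the flags and values are exactly the ones logged, with -u conventionally setting the in-capsule data size):

# Create the RDMA transport on the running nvmf_tgt.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192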
00:16:29.615 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.615 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:16:29.615 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:16:29.615 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:29.615 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:29.615 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:29.615 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.615 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.615 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.615 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.615 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.615 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.615 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.616 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.616 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.616 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.616 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.616 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.616 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.616 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.616 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.616 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.616 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.616 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.616 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.616 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.616 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:16:29.616 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.616 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:29.616 Malloc1 00:16:29.616 [2024-11-07 10:44:57.144139] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:29.616 Malloc2 00:16:29.616 Malloc3 00:16:29.616 Malloc4 00:16:29.875 Malloc5 00:16:29.875 Malloc6 00:16:29.875 Malloc7 00:16:29.875 Malloc8 00:16:29.875 Malloc9 00:16:29.875 Malloc10 00:16:29.875 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.875 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:16:29.875 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:29.875 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:30.135 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3791579 00:16:30.135 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3791579 /var/tmp/bdevperf.sock 00:16:30.135 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3791579 ']' 00:16:30.135 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:30.135 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:30.135 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:16:30.135 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:30.135 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:30.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
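shutdown.sh@28-29 looped i=1..10 and cat-ed one stanza per subsystem into rpcs.txt, and shutdown.sh@36 replayed the whole file through a single rpc_cmd; the Malloc1 through Malloc10 bdevs and the listener on 192.168.100.8:4420 logged above are the result. The stanza bodies themselves are not echoed in the trace, so the per-iteration shape below is hypothetical: only the bdev names, the cnode NQN pattern, and the address and port are taken from the log, and the Malloc sizing is purely illustrative.

  # Hypothetical rpcs.txt fragment for i=1 (sizes are placeholders)
  bdev_malloc_create -b Malloc1 128 512
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420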
00:16:30.135 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:30.135 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:16:30.135 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:30.136 { 00:16:30.136 "params": { 00:16:30.136 "name": "Nvme$subsystem", 00:16:30.136 "trtype": "$TEST_TRANSPORT", 00:16:30.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.136 "adrfam": "ipv4", 00:16:30.136 "trsvcid": "$NVMF_PORT", 00:16:30.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.136 "hdgst": ${hdgst:-false}, 00:16:30.136 "ddgst": ${ddgst:-false} 00:16:30.136 }, 00:16:30.136 "method": "bdev_nvme_attach_controller" 00:16:30.136 } 00:16:30.136 EOF 00:16:30.136 )") 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:30.136 { 00:16:30.136 "params": { 00:16:30.136 "name": "Nvme$subsystem", 00:16:30.136 "trtype": "$TEST_TRANSPORT", 00:16:30.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.136 "adrfam": "ipv4", 00:16:30.136 "trsvcid": "$NVMF_PORT", 00:16:30.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.136 "hdgst": ${hdgst:-false}, 00:16:30.136 "ddgst": ${ddgst:-false} 00:16:30.136 }, 00:16:30.136 "method": "bdev_nvme_attach_controller" 00:16:30.136 } 00:16:30.136 EOF 00:16:30.136 )") 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:30.136 { 00:16:30.136 "params": { 00:16:30.136 "name": "Nvme$subsystem", 00:16:30.136 "trtype": "$TEST_TRANSPORT", 00:16:30.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.136 "adrfam": "ipv4", 00:16:30.136 "trsvcid": "$NVMF_PORT", 00:16:30.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.136 "hdgst": ${hdgst:-false}, 00:16:30.136 "ddgst": ${ddgst:-false} 00:16:30.136 }, 00:16:30.136 "method": "bdev_nvme_attach_controller" 00:16:30.136 } 00:16:30.136 EOF 00:16:30.136 )") 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:30.136 { 00:16:30.136 "params": { 00:16:30.136 "name": "Nvme$subsystem", 00:16:30.136 "trtype": "$TEST_TRANSPORT", 00:16:30.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.136 "adrfam": "ipv4", 00:16:30.136 "trsvcid": "$NVMF_PORT", 00:16:30.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.136 "hdgst": ${hdgst:-false}, 00:16:30.136 "ddgst": ${ddgst:-false} 00:16:30.136 }, 00:16:30.136 "method": "bdev_nvme_attach_controller" 00:16:30.136 } 00:16:30.136 EOF 00:16:30.136 )") 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:30.136 { 00:16:30.136 "params": { 00:16:30.136 "name": "Nvme$subsystem", 00:16:30.136 "trtype": "$TEST_TRANSPORT", 00:16:30.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.136 "adrfam": "ipv4", 00:16:30.136 "trsvcid": "$NVMF_PORT", 00:16:30.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.136 "hdgst": ${hdgst:-false}, 00:16:30.136 "ddgst": ${ddgst:-false} 00:16:30.136 }, 00:16:30.136 "method": "bdev_nvme_attach_controller" 00:16:30.136 } 00:16:30.136 EOF 00:16:30.136 )") 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:30.136 [2024-11-07 10:44:57.634009] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:16:30.136 [2024-11-07 10:44:57.634060] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:30.136 { 00:16:30.136 "params": { 00:16:30.136 "name": "Nvme$subsystem", 00:16:30.136 "trtype": "$TEST_TRANSPORT", 00:16:30.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.136 "adrfam": "ipv4", 00:16:30.136 "trsvcid": "$NVMF_PORT", 00:16:30.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.136 "hdgst": ${hdgst:-false}, 00:16:30.136 "ddgst": ${ddgst:-false} 00:16:30.136 }, 00:16:30.136 "method": "bdev_nvme_attach_controller" 00:16:30.136 } 00:16:30.136 EOF 00:16:30.136 )") 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:30.136 { 00:16:30.136 "params": { 00:16:30.136 "name": "Nvme$subsystem", 00:16:30.136 "trtype": "$TEST_TRANSPORT", 00:16:30.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.136 "adrfam": "ipv4", 00:16:30.136 "trsvcid": "$NVMF_PORT", 00:16:30.136 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.136 "hdgst": ${hdgst:-false}, 00:16:30.136 "ddgst": ${ddgst:-false} 00:16:30.136 }, 00:16:30.136 "method": "bdev_nvme_attach_controller" 00:16:30.136 } 00:16:30.136 EOF 00:16:30.136 )") 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:30.136 { 00:16:30.136 "params": { 00:16:30.136 "name": "Nvme$subsystem", 00:16:30.136 "trtype": "$TEST_TRANSPORT", 00:16:30.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.136 "adrfam": "ipv4", 00:16:30.136 "trsvcid": "$NVMF_PORT", 00:16:30.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.136 "hdgst": ${hdgst:-false}, 00:16:30.136 "ddgst": ${ddgst:-false} 00:16:30.136 }, 00:16:30.136 "method": "bdev_nvme_attach_controller" 00:16:30.136 } 00:16:30.136 EOF 00:16:30.136 )") 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:30.136 { 00:16:30.136 "params": { 00:16:30.136 "name": "Nvme$subsystem", 00:16:30.136 "trtype": "$TEST_TRANSPORT", 00:16:30.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.136 "adrfam": "ipv4", 00:16:30.136 "trsvcid": "$NVMF_PORT", 00:16:30.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.136 "hdgst": ${hdgst:-false}, 00:16:30.136 "ddgst": ${ddgst:-false} 00:16:30.136 }, 00:16:30.136 "method": "bdev_nvme_attach_controller" 00:16:30.136 } 00:16:30.136 EOF 00:16:30.136 )") 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:30.136 { 00:16:30.136 "params": { 00:16:30.136 "name": "Nvme$subsystem", 00:16:30.136 "trtype": "$TEST_TRANSPORT", 00:16:30.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.136 "adrfam": "ipv4", 00:16:30.136 "trsvcid": "$NVMF_PORT", 00:16:30.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.136 "hdgst": ${hdgst:-false}, 00:16:30.136 "ddgst": ${ddgst:-false} 00:16:30.136 }, 00:16:30.136 "method": "bdev_nvme_attach_controller" 00:16:30.136 } 00:16:30.136 EOF 00:16:30.136 )") 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:16:30.136 10:44:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:16:30.137 "params": { 00:16:30.137 "name": "Nvme1", 00:16:30.137 "trtype": "rdma", 00:16:30.137 "traddr": "192.168.100.8", 00:16:30.137 "adrfam": "ipv4", 00:16:30.137 "trsvcid": "4420", 00:16:30.137 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:30.137 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:30.137 "hdgst": false, 00:16:30.137 "ddgst": false 00:16:30.137 }, 00:16:30.137 "method": "bdev_nvme_attach_controller" 00:16:30.137 },{ 00:16:30.137 "params": { 00:16:30.137 "name": "Nvme2", 00:16:30.137 "trtype": "rdma", 00:16:30.137 "traddr": "192.168.100.8", 00:16:30.137 "adrfam": "ipv4", 00:16:30.137 "trsvcid": "4420", 00:16:30.137 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:30.137 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:30.137 "hdgst": false, 00:16:30.137 "ddgst": false 00:16:30.137 }, 00:16:30.137 "method": "bdev_nvme_attach_controller" 00:16:30.137 },{ 00:16:30.137 "params": { 00:16:30.137 "name": "Nvme3", 00:16:30.137 "trtype": "rdma", 00:16:30.137 "traddr": "192.168.100.8", 00:16:30.137 "adrfam": "ipv4", 00:16:30.137 "trsvcid": "4420", 00:16:30.137 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:30.137 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:30.137 "hdgst": false, 00:16:30.137 "ddgst": false 00:16:30.137 }, 00:16:30.137 "method": "bdev_nvme_attach_controller" 00:16:30.137 },{ 00:16:30.137 "params": { 00:16:30.137 "name": "Nvme4", 00:16:30.137 "trtype": "rdma", 00:16:30.137 "traddr": "192.168.100.8", 00:16:30.137 "adrfam": "ipv4", 00:16:30.137 "trsvcid": "4420", 00:16:30.137 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:30.137 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:30.137 "hdgst": false, 00:16:30.137 "ddgst": false 00:16:30.137 }, 00:16:30.137 "method": "bdev_nvme_attach_controller" 00:16:30.137 },{ 00:16:30.137 "params": { 00:16:30.137 "name": "Nvme5", 00:16:30.137 "trtype": "rdma", 00:16:30.137 "traddr": "192.168.100.8", 00:16:30.137 "adrfam": "ipv4", 00:16:30.137 "trsvcid": "4420", 00:16:30.137 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:30.137 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:30.137 "hdgst": false, 00:16:30.137 "ddgst": false 00:16:30.137 }, 00:16:30.137 "method": "bdev_nvme_attach_controller" 00:16:30.137 },{ 00:16:30.137 "params": { 00:16:30.137 "name": "Nvme6", 00:16:30.137 "trtype": "rdma", 00:16:30.137 "traddr": "192.168.100.8", 00:16:30.137 "adrfam": "ipv4", 00:16:30.137 "trsvcid": "4420", 00:16:30.137 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:30.137 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:30.137 "hdgst": false, 00:16:30.137 "ddgst": false 00:16:30.137 }, 00:16:30.137 "method": "bdev_nvme_attach_controller" 00:16:30.137 },{ 00:16:30.137 "params": { 00:16:30.137 "name": "Nvme7", 00:16:30.137 "trtype": "rdma", 00:16:30.137 "traddr": "192.168.100.8", 00:16:30.137 "adrfam": "ipv4", 00:16:30.137 "trsvcid": "4420", 00:16:30.137 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:30.137 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:30.137 "hdgst": false, 00:16:30.137 "ddgst": false 00:16:30.137 }, 00:16:30.137 "method": "bdev_nvme_attach_controller" 00:16:30.137 },{ 00:16:30.137 "params": { 00:16:30.137 "name": "Nvme8", 00:16:30.137 "trtype": "rdma", 00:16:30.137 "traddr": "192.168.100.8", 00:16:30.137 "adrfam": "ipv4", 00:16:30.137 "trsvcid": "4420", 00:16:30.137 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:16:30.137 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:30.137 "hdgst": false, 00:16:30.137 "ddgst": false 00:16:30.137 }, 00:16:30.137 "method": "bdev_nvme_attach_controller" 00:16:30.137 },{ 00:16:30.137 "params": { 00:16:30.137 "name": "Nvme9", 00:16:30.137 "trtype": "rdma", 00:16:30.137 "traddr": "192.168.100.8", 00:16:30.137 "adrfam": "ipv4", 00:16:30.137 "trsvcid": "4420", 00:16:30.137 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:30.137 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:30.137 "hdgst": false, 00:16:30.137 "ddgst": false 00:16:30.137 }, 00:16:30.137 "method": "bdev_nvme_attach_controller" 00:16:30.137 },{ 00:16:30.137 "params": { 00:16:30.137 "name": "Nvme10", 00:16:30.137 "trtype": "rdma", 00:16:30.137 "traddr": "192.168.100.8", 00:16:30.137 "adrfam": "ipv4", 00:16:30.137 "trsvcid": "4420", 00:16:30.137 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:30.137 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:30.137 "hdgst": false, 00:16:30.137 "ddgst": false 00:16:30.137 }, 00:16:30.137 "method": "bdev_nvme_attach_controller" 00:16:30.137 }' 00:16:30.137 [2024-11-07 10:44:57.713279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.137 [2024-11-07 10:44:57.752831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.074 10:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:31.074 10:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:16:31.074 10:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:31.074 10:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.074 10:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:31.074 10:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.074 10:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3791579 00:16:31.074 10:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:16:31.074 10:44:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:16:32.010 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3791579 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:16:32.010 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3791279 00:16:32.010 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:32.010 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:32.010 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:16:32.010 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:16:32.010 10:44:59 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:32.010 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:32.010 { 00:16:32.010 "params": { 00:16:32.010 "name": "Nvme$subsystem", 00:16:32.010 "trtype": "$TEST_TRANSPORT", 00:16:32.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.010 "adrfam": "ipv4", 00:16:32.010 "trsvcid": "$NVMF_PORT", 00:16:32.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.010 "hdgst": ${hdgst:-false}, 00:16:32.010 "ddgst": ${ddgst:-false} 00:16:32.010 }, 00:16:32.010 "method": "bdev_nvme_attach_controller" 00:16:32.010 } 00:16:32.010 EOF 00:16:32.010 )") 00:16:32.010 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:32.011 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:32.011 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:32.011 { 00:16:32.011 "params": { 00:16:32.011 "name": "Nvme$subsystem", 00:16:32.011 "trtype": "$TEST_TRANSPORT", 00:16:32.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.011 "adrfam": "ipv4", 00:16:32.011 "trsvcid": "$NVMF_PORT", 00:16:32.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.011 "hdgst": ${hdgst:-false}, 00:16:32.011 "ddgst": ${ddgst:-false} 00:16:32.011 }, 00:16:32.011 "method": "bdev_nvme_attach_controller" 00:16:32.011 } 00:16:32.011 EOF 00:16:32.011 )") 00:16:32.011 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:32.011 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:32.011 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:32.011 { 00:16:32.011 "params": { 00:16:32.011 "name": "Nvme$subsystem", 00:16:32.011 "trtype": "$TEST_TRANSPORT", 00:16:32.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.011 "adrfam": "ipv4", 00:16:32.011 "trsvcid": "$NVMF_PORT", 00:16:32.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.011 "hdgst": ${hdgst:-false}, 00:16:32.011 "ddgst": ${ddgst:-false} 00:16:32.011 }, 00:16:32.011 "method": "bdev_nvme_attach_controller" 00:16:32.011 } 00:16:32.011 EOF 00:16:32.011 )") 00:16:32.011 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:32.011 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:32.011 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:32.011 { 00:16:32.011 "params": { 00:16:32.011 "name": "Nvme$subsystem", 00:16:32.011 "trtype": "$TEST_TRANSPORT", 00:16:32.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.011 "adrfam": "ipv4", 00:16:32.011 "trsvcid": "$NVMF_PORT", 00:16:32.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.011 "hdgst": ${hdgst:-false}, 00:16:32.011 "ddgst": ${ddgst:-false} 00:16:32.011 }, 00:16:32.011 "method": 
"bdev_nvme_attach_controller" 00:16:32.011 } 00:16:32.011 EOF 00:16:32.011 )") 00:16:32.011 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:32.011 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:32.011 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:32.011 { 00:16:32.011 "params": { 00:16:32.011 "name": "Nvme$subsystem", 00:16:32.011 "trtype": "$TEST_TRANSPORT", 00:16:32.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.011 "adrfam": "ipv4", 00:16:32.011 "trsvcid": "$NVMF_PORT", 00:16:32.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.011 "hdgst": ${hdgst:-false}, 00:16:32.011 "ddgst": ${ddgst:-false} 00:16:32.011 }, 00:16:32.011 "method": "bdev_nvme_attach_controller" 00:16:32.011 } 00:16:32.011 EOF 00:16:32.011 )") 00:16:32.011 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:32.011 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:32.011 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:32.011 { 00:16:32.011 "params": { 00:16:32.011 "name": "Nvme$subsystem", 00:16:32.011 "trtype": "$TEST_TRANSPORT", 00:16:32.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.011 "adrfam": "ipv4", 00:16:32.011 "trsvcid": "$NVMF_PORT", 00:16:32.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.011 "hdgst": ${hdgst:-false}, 00:16:32.011 "ddgst": ${ddgst:-false} 00:16:32.011 }, 00:16:32.011 "method": "bdev_nvme_attach_controller" 00:16:32.011 } 00:16:32.011 EOF 00:16:32.011 )") 00:16:32.011 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:32.011 [2024-11-07 10:44:59.669002] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:16:32.011 [2024-11-07 10:44:59.669055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3791869 ] 00:16:32.011 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:32.011 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:32.011 { 00:16:32.011 "params": { 00:16:32.011 "name": "Nvme$subsystem", 00:16:32.011 "trtype": "$TEST_TRANSPORT", 00:16:32.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.011 "adrfam": "ipv4", 00:16:32.011 "trsvcid": "$NVMF_PORT", 00:16:32.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.011 "hdgst": ${hdgst:-false}, 00:16:32.011 "ddgst": ${ddgst:-false} 00:16:32.011 }, 00:16:32.011 "method": "bdev_nvme_attach_controller" 00:16:32.011 } 00:16:32.011 EOF 00:16:32.011 )") 00:16:32.011 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:32.270 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:32.270 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:32.270 { 00:16:32.270 "params": { 00:16:32.270 "name": "Nvme$subsystem", 00:16:32.270 "trtype": "$TEST_TRANSPORT", 00:16:32.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.270 "adrfam": "ipv4", 00:16:32.270 "trsvcid": "$NVMF_PORT", 00:16:32.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.270 "hdgst": ${hdgst:-false}, 00:16:32.270 "ddgst": ${ddgst:-false} 00:16:32.270 }, 00:16:32.270 "method": "bdev_nvme_attach_controller" 00:16:32.270 } 00:16:32.270 EOF 00:16:32.270 )") 00:16:32.270 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:32.270 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:32.270 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:32.270 { 00:16:32.270 "params": { 00:16:32.270 "name": "Nvme$subsystem", 00:16:32.270 "trtype": "$TEST_TRANSPORT", 00:16:32.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.270 "adrfam": "ipv4", 00:16:32.270 "trsvcid": "$NVMF_PORT", 00:16:32.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.270 "hdgst": ${hdgst:-false}, 00:16:32.270 "ddgst": ${ddgst:-false} 00:16:32.270 }, 00:16:32.270 "method": "bdev_nvme_attach_controller" 00:16:32.270 } 00:16:32.270 EOF 00:16:32.270 )") 00:16:32.270 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:32.270 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:32.270 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:32.270 { 00:16:32.270 "params": { 00:16:32.270 "name": "Nvme$subsystem", 00:16:32.270 "trtype": "$TEST_TRANSPORT", 00:16:32.270 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:16:32.270 "adrfam": "ipv4", 00:16:32.270 "trsvcid": "$NVMF_PORT", 00:16:32.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.270 "hdgst": ${hdgst:-false}, 00:16:32.270 "ddgst": ${ddgst:-false} 00:16:32.270 }, 00:16:32.270 "method": "bdev_nvme_attach_controller" 00:16:32.270 } 00:16:32.270 EOF 00:16:32.270 )") 00:16:32.270 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:16:32.270 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:16:32.270 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:16:32.270 10:44:59 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:16:32.270 "params": { 00:16:32.270 "name": "Nvme1", 00:16:32.270 "trtype": "rdma", 00:16:32.270 "traddr": "192.168.100.8", 00:16:32.270 "adrfam": "ipv4", 00:16:32.270 "trsvcid": "4420", 00:16:32.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:32.270 "hdgst": false, 00:16:32.270 "ddgst": false 00:16:32.270 }, 00:16:32.270 "method": "bdev_nvme_attach_controller" 00:16:32.270 },{ 00:16:32.270 "params": { 00:16:32.270 "name": "Nvme2", 00:16:32.270 "trtype": "rdma", 00:16:32.270 "traddr": "192.168.100.8", 00:16:32.270 "adrfam": "ipv4", 00:16:32.270 "trsvcid": "4420", 00:16:32.270 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:32.270 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:32.270 "hdgst": false, 00:16:32.270 "ddgst": false 00:16:32.270 }, 00:16:32.270 "method": "bdev_nvme_attach_controller" 00:16:32.270 },{ 00:16:32.270 "params": { 00:16:32.270 "name": "Nvme3", 00:16:32.270 "trtype": "rdma", 00:16:32.270 "traddr": "192.168.100.8", 00:16:32.270 "adrfam": "ipv4", 00:16:32.270 "trsvcid": "4420", 00:16:32.270 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:32.270 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:32.270 "hdgst": false, 00:16:32.270 "ddgst": false 00:16:32.270 }, 00:16:32.270 "method": "bdev_nvme_attach_controller" 00:16:32.270 },{ 00:16:32.270 "params": { 00:16:32.270 "name": "Nvme4", 00:16:32.270 "trtype": "rdma", 00:16:32.270 "traddr": "192.168.100.8", 00:16:32.270 "adrfam": "ipv4", 00:16:32.270 "trsvcid": "4420", 00:16:32.270 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:32.270 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:32.270 "hdgst": false, 00:16:32.270 "ddgst": false 00:16:32.270 }, 00:16:32.270 "method": "bdev_nvme_attach_controller" 00:16:32.270 },{ 00:16:32.270 "params": { 00:16:32.270 "name": "Nvme5", 00:16:32.270 "trtype": "rdma", 00:16:32.270 "traddr": "192.168.100.8", 00:16:32.270 "adrfam": "ipv4", 00:16:32.270 "trsvcid": "4420", 00:16:32.270 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:32.270 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:32.270 "hdgst": false, 00:16:32.270 "ddgst": false 00:16:32.270 }, 00:16:32.270 "method": "bdev_nvme_attach_controller" 00:16:32.270 },{ 00:16:32.270 "params": { 00:16:32.270 "name": "Nvme6", 00:16:32.270 "trtype": "rdma", 00:16:32.270 "traddr": "192.168.100.8", 00:16:32.270 "adrfam": "ipv4", 00:16:32.270 "trsvcid": "4420", 00:16:32.270 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:32.270 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:32.270 "hdgst": false, 00:16:32.270 "ddgst": false 00:16:32.270 }, 00:16:32.270 "method": "bdev_nvme_attach_controller" 00:16:32.270 },{ 00:16:32.270 "params": { 00:16:32.270 "name": "Nvme7", 00:16:32.270 
"trtype": "rdma", 00:16:32.270 "traddr": "192.168.100.8", 00:16:32.270 "adrfam": "ipv4", 00:16:32.270 "trsvcid": "4420", 00:16:32.270 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:32.271 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:32.271 "hdgst": false, 00:16:32.271 "ddgst": false 00:16:32.271 }, 00:16:32.271 "method": "bdev_nvme_attach_controller" 00:16:32.271 },{ 00:16:32.271 "params": { 00:16:32.271 "name": "Nvme8", 00:16:32.271 "trtype": "rdma", 00:16:32.271 "traddr": "192.168.100.8", 00:16:32.271 "adrfam": "ipv4", 00:16:32.271 "trsvcid": "4420", 00:16:32.271 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:32.271 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:32.271 "hdgst": false, 00:16:32.271 "ddgst": false 00:16:32.271 }, 00:16:32.271 "method": "bdev_nvme_attach_controller" 00:16:32.271 },{ 00:16:32.271 "params": { 00:16:32.271 "name": "Nvme9", 00:16:32.271 "trtype": "rdma", 00:16:32.271 "traddr": "192.168.100.8", 00:16:32.271 "adrfam": "ipv4", 00:16:32.271 "trsvcid": "4420", 00:16:32.271 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:32.271 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:32.271 "hdgst": false, 00:16:32.271 "ddgst": false 00:16:32.271 }, 00:16:32.271 "method": "bdev_nvme_attach_controller" 00:16:32.271 },{ 00:16:32.271 "params": { 00:16:32.271 "name": "Nvme10", 00:16:32.271 "trtype": "rdma", 00:16:32.271 "traddr": "192.168.100.8", 00:16:32.271 "adrfam": "ipv4", 00:16:32.271 "trsvcid": "4420", 00:16:32.271 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:32.271 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:32.271 "hdgst": false, 00:16:32.271 "ddgst": false 00:16:32.271 }, 00:16:32.271 "method": "bdev_nvme_attach_controller" 00:16:32.271 }' 00:16:32.271 [2024-11-07 10:44:59.748839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.271 [2024-11-07 10:44:59.788414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.207 Running I/O for 1 seconds... 
00:16:34.586 3570.00 IOPS, 223.12 MiB/s
00:16:34.586 Latency(us)
00:16:34.586 [2024-11-07T09:45:02.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:34.586 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.586 Verification LBA range: start 0x0 length 0x400
00:16:34.586 Nvme1n1 : 1.19 377.70 23.61 0.00 0.00 167157.12 9594.47 214748.36
00:16:34.586 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.586 Verification LBA range: start 0x0 length 0x400
00:16:34.586 Nvme2n1 : 1.19 377.32 23.58 0.00 0.00 164679.74 9961.47 205520.90
00:16:34.586 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.586 Verification LBA range: start 0x0 length 0x400
00:16:34.586 Nvme3n1 : 1.19 403.86 25.24 0.00 0.00 151819.14 5740.95 142606.34
00:16:34.586 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.586 Verification LBA range: start 0x0 length 0x400
00:16:34.586 Nvme4n1 : 1.19 403.47 25.22 0.00 0.00 149973.29 10171.19 135895.45
00:16:34.586 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.586 Verification LBA range: start 0x0 length 0x400
00:16:34.586 Nvme5n1 : 1.19 388.88 24.31 0.00 0.00 153172.66 10433.33 124990.26
00:16:34.586 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.586 Verification LBA range: start 0x0 length 0x400
00:16:34.586 Nvme6n1 : 1.19 402.76 25.17 0.00 0.00 146280.83 10590.62 118279.37
00:16:34.586 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.586 Verification LBA range: start 0x0 length 0x400
00:16:34.586 Nvme7n1 : 1.19 389.88 24.37 0.00 0.00 148756.57 10538.19 108213.04
00:16:34.586 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.586 Verification LBA range: start 0x0 length 0x400
00:16:34.586 Nvme8n1 : 1.19 402.06 25.13 0.00 0.00 142533.04 10747.90 101082.73
00:16:34.586 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.586 Verification LBA range: start 0x0 length 0x400
00:16:34.586 Nvme9n1 : 1.18 378.39 23.65 0.00 0.00 149918.52 25060.97 101082.73
00:16:34.586 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.586 Verification LBA range: start 0x0 length 0x400
00:16:34.586 Nvme10n1 : 1.20 267.65 16.73 0.00 0.00 209026.33 9856.61 452984.83
00:16:34.586 [2024-11-07T09:45:02.257Z] ===================================================================================================================
00:16:34.586 [2024-11-07T09:45:02.257Z] Total : 3791.97 237.00 0.00 0.00 156537.37 5740.95 452984.83
00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:16:34.586 10:45:02
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:34.586 rmmod nvme_rdma 00:16:34.586 rmmod nvme_fabrics 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3791279 ']' 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3791279 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 3791279 ']' 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 3791279 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3791279 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3791279' 00:16:34.586 killing process with pid 3791279 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 3791279 00:16:34.586 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 3791279 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:35.156 00:16:35.156 real 0m12.862s 00:16:35.156 user 0m28.297s 00:16:35.156 sys 0m6.241s 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:35.156 10:45:02 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:35.156 ************************************ 00:16:35.156 END TEST nvmf_shutdown_tc1 00:16:35.156 ************************************ 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:16:35.156 ************************************ 00:16:35.156 START TEST nvmf_shutdown_tc2 00:16:35.156 ************************************ 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:35.156 10:45:02 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:16:35.156 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # 
pci_devs+=("${mlx[@]}") 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:35.157 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:35.157 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:35.157 10:45:02 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:35.157 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:35.157 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 
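tc2's nvmftestinit re-derives the test NICs from scratch: it matches the two Mellanox functions (vendor 0x15b3, device 0x1015) at 0000:d9:00.0 and 0000:d9:00.1, resolves each to its net device through sysfs, and loads the full kernel RDMA stack, ib_cm through rdma_ucm. The sysfs hop performed by the common.sh@411 globs reduces to:

  # Mapping a matched PCI function to its net device (path from common.sh@411)
  pci=0000:d9:00.0
  ls /sys/bus/pci/devices/$pci/net/   # -> mlx_0_0 on this rig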
00:16:35.157 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:35.418 10:45:02 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:35.418 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:35.418 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:35.418 altname enp217s0f0np0 00:16:35.418 altname ens818f0np0 00:16:35.418 inet 192.168.100.8/24 scope global mlx_0_0 00:16:35.418 valid_lft forever preferred_lft forever 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:35.418 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:35.418 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:35.418 altname enp217s0f1np1 00:16:35.418 altname ens818f1np1 00:16:35.418 inet 192.168.100.9/24 scope global mlx_0_1 00:16:35.418 valid_lft forever preferred_lft forever 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:35.418 
10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:35.418 192.168.100.9' 00:16:35.418 10:45:02 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:35.418 192.168.100.9' 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:35.418 192.168.100.9' 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:35.418 10:45:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:35.418 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:16:35.418 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:35.419 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:35.419 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:35.419 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3792674 00:16:35.419 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3792674 00:16:35.419 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:35.419 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3792674 ']' 00:16:35.419 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.419 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:35.419 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
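Annotation: get_ip_address and the RDMA_IP_LIST split traced above reduce `ip -o -4 addr show` output to bare IPv4 addresses, then peel off the first and second target IPs. A sketch under those assumptions (helper name and pipeline taken from the nvmf/common.sh@116-117 and @485-486 trace):

    get_ip_address() {
        # "6: mlx_0_0    inet 192.168.100.8/24 ..." -> "192.168.100.8"
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)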
00:16:35.419 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:35.419 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:35.419 [2024-11-07 10:45:03.063264] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:16:35.419 [2024-11-07 10:45:03.063314] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.678 [2024-11-07 10:45:03.139234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:35.678 [2024-11-07 10:45:03.179837] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.678 [2024-11-07 10:45:03.179875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.678 [2024-11-07 10:45:03.179885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.678 [2024-11-07 10:45:03.179895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.678 [2024-11-07 10:45:03.179902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.678 [2024-11-07 10:45:03.181567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.678 [2024-11-07 10:45:03.181651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:35.678 [2024-11-07 10:45:03.181762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.678 [2024-11-07 10:45:03.181763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:35.678 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:35.678 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:16:35.678 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:35.678 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:35.678 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:35.678 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.678 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:35.678 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.678 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:35.937 [2024-11-07 10:45:03.355203] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22740f0/0x22785e0) succeed. 00:16:35.937 [2024-11-07 10:45:03.364379] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2275780/0x22b9c80) succeed. 
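Annotation: with both IB devices created, nvmfappstart has launched nvmf_tgt (pid 3792674, core mask 0x1E) and shutdown.sh@21 creates the transport through rpc_cmd, the autotest wrapper around SPDK's JSON-RPC client. A roughly equivalent direct invocation, assuming the target's default RPC socket:

    # Hypothetical direct form of the traced nvmf_create_transport RPC:
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
        -t rdma --num-shared-buffers 1024 -u 8192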
00:16:35.937 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.937 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:16:35.937 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:16:35.937 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:35.937 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:35.937 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:35.937 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:35.937 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.938 10:45:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:35.938 Malloc1 00:16:35.938 [2024-11-07 10:45:03.602402] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:36.197 Malloc2 00:16:36.197 Malloc3 00:16:36.197 Malloc4 00:16:36.197 Malloc5 00:16:36.197 Malloc6 00:16:36.197 Malloc7 00:16:36.457 Malloc8 00:16:36.457 Malloc9 00:16:36.457 Malloc10 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3793032 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3793032 /var/tmp/bdevperf.sock 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3793032 ']' 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:36.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
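Annotation: the per-cnode heredocs that shutdown.sh@28-29 cats into rpcs.txt are not echoed by xtrace; only the resulting Malloc1-Malloc10 bdevs and the rdma listener on 192.168.100.8:4420 are visible. The bdevperf launch at shutdown.sh@103 is visible, though, and its --json /dev/fd/63 argument is bash process substitution around the generated config. A sketch, with $rootdir standing in for the workspace path seen in the trace:

    # Launch bdevperf against the generated NVMe-oF attach config; the
    # <(...) substitution is what shows up as /dev/fd/63 in the trace.
    "$rootdir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
        -q 64 -o 65536 -w verify -t 10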
00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:36.457 { 00:16:36.457 "params": { 00:16:36.457 "name": "Nvme$subsystem", 00:16:36.457 "trtype": "$TEST_TRANSPORT", 00:16:36.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:36.457 "adrfam": "ipv4", 00:16:36.457 "trsvcid": "$NVMF_PORT", 00:16:36.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:36.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:36.457 "hdgst": ${hdgst:-false}, 00:16:36.457 "ddgst": ${ddgst:-false} 00:16:36.457 }, 00:16:36.457 "method": "bdev_nvme_attach_controller" 00:16:36.457 } 00:16:36.457 EOF 00:16:36.457 )") 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:36.457 { 00:16:36.457 "params": { 00:16:36.457 "name": "Nvme$subsystem", 00:16:36.457 "trtype": "$TEST_TRANSPORT", 00:16:36.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:36.457 "adrfam": "ipv4", 00:16:36.457 "trsvcid": "$NVMF_PORT", 00:16:36.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:36.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:36.457 "hdgst": ${hdgst:-false}, 00:16:36.457 "ddgst": ${ddgst:-false} 00:16:36.457 }, 00:16:36.457 "method": "bdev_nvme_attach_controller" 00:16:36.457 } 00:16:36.457 EOF 00:16:36.457 )") 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:36.457 { 00:16:36.457 "params": { 00:16:36.457 "name": "Nvme$subsystem", 00:16:36.457 "trtype": "$TEST_TRANSPORT", 00:16:36.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:36.457 "adrfam": "ipv4", 00:16:36.457 "trsvcid": "$NVMF_PORT", 00:16:36.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:36.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:36.457 "hdgst": ${hdgst:-false}, 00:16:36.457 "ddgst": ${ddgst:-false} 00:16:36.457 }, 00:16:36.457 "method": "bdev_nvme_attach_controller" 00:16:36.457 } 00:16:36.457 EOF 00:16:36.457 )") 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:36.457 { 00:16:36.457 "params": { 00:16:36.457 "name": "Nvme$subsystem", 00:16:36.457 "trtype": "$TEST_TRANSPORT", 00:16:36.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:36.457 "adrfam": "ipv4", 00:16:36.457 "trsvcid": "$NVMF_PORT", 00:16:36.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:36.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:36.457 "hdgst": ${hdgst:-false}, 00:16:36.457 "ddgst": ${ddgst:-false} 00:16:36.457 }, 00:16:36.457 "method": "bdev_nvme_attach_controller" 00:16:36.457 } 00:16:36.457 EOF 00:16:36.457 )") 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:36.457 { 00:16:36.457 "params": { 00:16:36.457 "name": "Nvme$subsystem", 00:16:36.457 "trtype": "$TEST_TRANSPORT", 00:16:36.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:36.457 "adrfam": "ipv4", 00:16:36.457 "trsvcid": "$NVMF_PORT", 00:16:36.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:36.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:36.457 "hdgst": ${hdgst:-false}, 00:16:36.457 "ddgst": ${ddgst:-false} 00:16:36.457 }, 00:16:36.457 "method": "bdev_nvme_attach_controller" 00:16:36.457 } 00:16:36.457 EOF 00:16:36.457 )") 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:16:36.457 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:36.458 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:36.458 { 00:16:36.458 "params": { 00:16:36.458 "name": "Nvme$subsystem", 00:16:36.458 "trtype": "$TEST_TRANSPORT", 00:16:36.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:36.458 "adrfam": "ipv4", 00:16:36.458 "trsvcid": "$NVMF_PORT", 00:16:36.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:36.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:36.458 "hdgst": ${hdgst:-false}, 00:16:36.458 "ddgst": ${ddgst:-false} 00:16:36.458 }, 00:16:36.458 "method": "bdev_nvme_attach_controller" 00:16:36.458 } 00:16:36.458 EOF 00:16:36.458 )") 00:16:36.458 [2024-11-07 10:45:04.098872] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:16:36.458 [2024-11-07 10:45:04.098923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3793032 ] 00:16:36.458 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:16:36.458 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:36.458 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:36.458 { 00:16:36.458 "params": { 00:16:36.458 "name": "Nvme$subsystem", 00:16:36.458 "trtype": "$TEST_TRANSPORT", 00:16:36.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:36.458 "adrfam": "ipv4", 00:16:36.458 "trsvcid": "$NVMF_PORT", 00:16:36.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:36.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:36.458 "hdgst": ${hdgst:-false}, 00:16:36.458 "ddgst": ${ddgst:-false} 00:16:36.458 }, 00:16:36.458 "method": "bdev_nvme_attach_controller" 00:16:36.458 } 00:16:36.458 EOF 00:16:36.458 )") 00:16:36.458 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:16:36.458 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:36.458 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:36.458 { 00:16:36.458 "params": { 00:16:36.458 "name": "Nvme$subsystem", 00:16:36.458 "trtype": "$TEST_TRANSPORT", 00:16:36.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:36.458 "adrfam": "ipv4", 00:16:36.458 "trsvcid": "$NVMF_PORT", 00:16:36.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:36.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:36.458 "hdgst": ${hdgst:-false}, 00:16:36.458 "ddgst": ${ddgst:-false} 00:16:36.458 }, 00:16:36.458 "method": "bdev_nvme_attach_controller" 00:16:36.458 } 00:16:36.458 EOF 00:16:36.458 )") 00:16:36.458 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:16:36.458 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:36.458 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:36.458 { 00:16:36.458 "params": { 00:16:36.458 "name": "Nvme$subsystem", 00:16:36.458 "trtype": "$TEST_TRANSPORT", 00:16:36.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:36.458 "adrfam": "ipv4", 00:16:36.458 "trsvcid": "$NVMF_PORT", 00:16:36.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:36.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:36.458 "hdgst": ${hdgst:-false}, 00:16:36.458 "ddgst": ${ddgst:-false} 00:16:36.458 }, 00:16:36.458 "method": "bdev_nvme_attach_controller" 00:16:36.458 } 00:16:36.458 EOF 00:16:36.458 )") 00:16:36.458 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:16:36.717 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:36.717 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:36.717 { 00:16:36.717 "params": { 00:16:36.717 "name": 
"Nvme$subsystem", 00:16:36.717 "trtype": "$TEST_TRANSPORT", 00:16:36.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:36.717 "adrfam": "ipv4", 00:16:36.717 "trsvcid": "$NVMF_PORT", 00:16:36.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:36.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:36.717 "hdgst": ${hdgst:-false}, 00:16:36.717 "ddgst": ${ddgst:-false} 00:16:36.717 }, 00:16:36.717 "method": "bdev_nvme_attach_controller" 00:16:36.717 } 00:16:36.717 EOF 00:16:36.717 )") 00:16:36.717 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:16:36.717 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:16:36.717 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:16:36.717 10:45:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:16:36.717 "params": { 00:16:36.717 "name": "Nvme1", 00:16:36.717 "trtype": "rdma", 00:16:36.717 "traddr": "192.168.100.8", 00:16:36.717 "adrfam": "ipv4", 00:16:36.717 "trsvcid": "4420", 00:16:36.717 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.717 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:36.717 "hdgst": false, 00:16:36.717 "ddgst": false 00:16:36.717 }, 00:16:36.717 "method": "bdev_nvme_attach_controller" 00:16:36.717 },{ 00:16:36.717 "params": { 00:16:36.717 "name": "Nvme2", 00:16:36.717 "trtype": "rdma", 00:16:36.717 "traddr": "192.168.100.8", 00:16:36.717 "adrfam": "ipv4", 00:16:36.717 "trsvcid": "4420", 00:16:36.717 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:36.717 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:36.717 "hdgst": false, 00:16:36.717 "ddgst": false 00:16:36.717 }, 00:16:36.717 "method": "bdev_nvme_attach_controller" 00:16:36.717 },{ 00:16:36.717 "params": { 00:16:36.717 "name": "Nvme3", 00:16:36.717 "trtype": "rdma", 00:16:36.717 "traddr": "192.168.100.8", 00:16:36.717 "adrfam": "ipv4", 00:16:36.717 "trsvcid": "4420", 00:16:36.717 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:36.717 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:36.717 "hdgst": false, 00:16:36.717 "ddgst": false 00:16:36.717 }, 00:16:36.717 "method": "bdev_nvme_attach_controller" 00:16:36.717 },{ 00:16:36.717 "params": { 00:16:36.717 "name": "Nvme4", 00:16:36.717 "trtype": "rdma", 00:16:36.717 "traddr": "192.168.100.8", 00:16:36.717 "adrfam": "ipv4", 00:16:36.717 "trsvcid": "4420", 00:16:36.717 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:36.717 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:36.717 "hdgst": false, 00:16:36.717 "ddgst": false 00:16:36.717 }, 00:16:36.717 "method": "bdev_nvme_attach_controller" 00:16:36.717 },{ 00:16:36.717 "params": { 00:16:36.717 "name": "Nvme5", 00:16:36.717 "trtype": "rdma", 00:16:36.717 "traddr": "192.168.100.8", 00:16:36.717 "adrfam": "ipv4", 00:16:36.717 "trsvcid": "4420", 00:16:36.717 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:36.717 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:36.717 "hdgst": false, 00:16:36.717 "ddgst": false 00:16:36.717 }, 00:16:36.718 "method": "bdev_nvme_attach_controller" 00:16:36.718 },{ 00:16:36.718 "params": { 00:16:36.718 "name": "Nvme6", 00:16:36.718 "trtype": "rdma", 00:16:36.718 "traddr": "192.168.100.8", 00:16:36.718 "adrfam": "ipv4", 00:16:36.718 "trsvcid": "4420", 00:16:36.718 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:36.718 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:36.718 "hdgst": false, 00:16:36.718 "ddgst": false 00:16:36.718 }, 00:16:36.718 "method": 
"bdev_nvme_attach_controller" 00:16:36.718 },{ 00:16:36.718 "params": { 00:16:36.718 "name": "Nvme7", 00:16:36.718 "trtype": "rdma", 00:16:36.718 "traddr": "192.168.100.8", 00:16:36.718 "adrfam": "ipv4", 00:16:36.718 "trsvcid": "4420", 00:16:36.718 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:36.718 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:36.718 "hdgst": false, 00:16:36.718 "ddgst": false 00:16:36.718 }, 00:16:36.718 "method": "bdev_nvme_attach_controller" 00:16:36.718 },{ 00:16:36.718 "params": { 00:16:36.718 "name": "Nvme8", 00:16:36.718 "trtype": "rdma", 00:16:36.718 "traddr": "192.168.100.8", 00:16:36.718 "adrfam": "ipv4", 00:16:36.718 "trsvcid": "4420", 00:16:36.718 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:36.718 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:36.718 "hdgst": false, 00:16:36.718 "ddgst": false 00:16:36.718 }, 00:16:36.718 "method": "bdev_nvme_attach_controller" 00:16:36.718 },{ 00:16:36.718 "params": { 00:16:36.718 "name": "Nvme9", 00:16:36.718 "trtype": "rdma", 00:16:36.718 "traddr": "192.168.100.8", 00:16:36.718 "adrfam": "ipv4", 00:16:36.718 "trsvcid": "4420", 00:16:36.718 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:36.718 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:36.718 "hdgst": false, 00:16:36.718 "ddgst": false 00:16:36.718 }, 00:16:36.718 "method": "bdev_nvme_attach_controller" 00:16:36.718 },{ 00:16:36.718 "params": { 00:16:36.718 "name": "Nvme10", 00:16:36.718 "trtype": "rdma", 00:16:36.718 "traddr": "192.168.100.8", 00:16:36.718 "adrfam": "ipv4", 00:16:36.718 "trsvcid": "4420", 00:16:36.718 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:36.718 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:36.718 "hdgst": false, 00:16:36.718 "ddgst": false 00:16:36.718 }, 00:16:36.718 "method": "bdev_nvme_attach_controller" 00:16:36.718 }' 00:16:36.718 [2024-11-07 10:45:04.177698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.718 [2024-11-07 10:45:04.216963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.656 Running I/O for 10 seconds... 
00:16:37.656 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:37.656 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:16:37.656 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:37.656 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.656 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:37.656 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.656 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:16:37.656 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:37.656 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:16:37.656 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:16:37.656 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:16:37.656 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:16:37.656 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:16:37.656 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:37.656 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:16:37.656 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.656 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:37.915 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.915 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:16:37.915 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:16:37.915 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:16:38.174 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:16:38.174 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:16:38.174 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:38.174 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:16:38.174 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.174 
10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:38.174 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.174 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=152 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 152 -ge 100 ']' 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3793032 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3793032 ']' 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3793032 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3793032 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3793032' killing process with pid 3793032 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3793032 10:45:05 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3793032
00:16:38.434 Received shutdown signal, test time was about 0.805636 seconds
00:16:38.434
00:16:38.434 Latency(us)
00:16:38.434 [2024-11-07T09:45:06.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:38.434 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:38.434 Verification LBA range: start 0x0 length 0x400
00:16:38.434 Nvme1n1 : 0.79 350.45 21.90 0.00 0.00 179046.72 7864.32 206359.76
00:16:38.434 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:38.434 Verification LBA range: start 0x0 length 0x400
00:16:38.434 Nvme2n1 : 0.79 362.56 22.66 0.00 0.00 169780.80 8021.61 197132.29
00:16:38.434 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:38.434 Verification LBA range: start 0x0 length 0x400
00:16:38.434 Nvme3n1 : 0.79 360.77 22.55 0.00 0.00 167097.05 3643.80 188743.68
00:16:38.434 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:38.434 Verification LBA range: start 0x0 length 0x400
00:16:38.434 Nvme4n1 : 0.79 402.99 25.19 0.00 0.00 146682.47 5321.52 130023.42
00:16:38.434 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:38.434 Verification LBA range: start 0x0 length 0x400
00:16:38.434 Nvme5n1 : 0.80 395.96 24.75 0.00 0.00 146526.76 8860.47 167772.16
00:16:38.434 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:38.434 Verification LBA range: start 0x0 length 0x400
00:16:38.434 Nvme6n1 : 0.80 401.63 25.10 0.00 0.00 140969.74 9542.04 110729.63
00:16:38.434 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:38.434 Verification LBA range: start 0x0 length 0x400
00:16:38.434 Nvme7n1 : 0.80 400.94 25.06 0.00 0.00 138468.97 10013.90 106535.32
00:16:38.434 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:38.434 Verification LBA range: start 0x0 length 0x400
00:16:38.434 Nvme8n1 : 0.80 400.23 25.01 0.00 0.00 135752.91 10590.62 100663.30
00:16:38.434 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:38.434 Verification LBA range: start 0x0 length 0x400
00:16:38.434 Nvme9n1 : 0.80 399.37 24.96 0.00 0.00 133639.37 11481.91 89758.11
00:16:38.434 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:38.434 Verification LBA range: start 0x0 length 0x400
00:16:38.434 Nvme10n1 : 0.81 318.01 19.88 0.00 0.00 163454.18 3040.87 208876.34
00:16:38.434 [2024-11-07T09:45:06.105Z] ===================================================================================================================
00:16:38.434 [2024-11-07T09:45:06.105Z] Total : 3792.91 237.06 0.00 0.00 151167.59 3040.87 208876.34
00:16:38.693 10:45:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:16:39.631 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3792674 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
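Annotation: before the kill traced above, waitforio (shutdown.sh@51-70) gated the shutdown on real traffic; it polls bdevperf's iostat until Nvme1n1 has completed at least 100 reads, which is why read_io_count goes from 3 to 152 in the trace. A minimal sketch, assuming the rpc_cmd wrapper:

    # Poll up to 10 times, 0.25s apart, for at least 100 completed reads.
    ret=1; i=10
    while (( i != 0 )); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
                            | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && { ret=0; break; }
        sleep 0.25
        (( i-- ))
    done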
00:16:39.631 rmmod nvme_rdma 00:16:39.631 rmmod nvme_fabrics 00:16:39.631 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:39.631 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:16:39.631 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:16:39.631 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3792674 ']' 00:16:39.631 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3792674 00:16:39.631 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3792674 ']' 00:16:39.631 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3792674 00:16:39.631 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:16:39.631 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:39.631 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3792674 00:16:39.631 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:39.631 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:39.631 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3792674' 00:16:39.631 killing process with pid 3792674 00:16:39.631 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3792674 00:16:39.631 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3792674 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:40.200 00:16:40.200 real 0m4.971s 00:16:40.200 user 0m19.805s 00:16:40.200 sys 0m1.133s 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:40.200 ************************************ 00:16:40.200 END TEST nvmf_shutdown_tc2 00:16:40.200 ************************************ 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:16:40.200 ************************************ 00:16:40.200 START TEST nvmf_shutdown_tc3 00:16:40.200 ************************************ 00:16:40.200 10:45:07 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:16:40.200 10:45:07 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:40.200 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown 
]] 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:40.200 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:40.201 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:40.201 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # 
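The stretch of trace above is the NIC discovery step from nvmf/common.sh: PCI vendor:device IDs for the supported Intel E810/X722 and Mellanox ConnectX parts are collected into the e810, x722 and mlx arrays, the mlx list becomes pci_devs because SPDK_TEST_NVMF_NICS=mlx5, and each matching function (0000:d9:00.0 and 0000:d9:00.1 here, device 0x1015, a ConnectX-4 Lx) is checked for a bound driver before its "Found ..." line is printed; for this device family the harness also stretches the connect timeout to 'nvme connect -i 15'. A minimal standalone sketch of the same walk, assuming only the stock Linux sysfs layout (the pci_bus_cache lookups and ID whitelists are replaced by a direct scan):

    # Scan sysfs for Mellanox (vendor 0x15b3) PCI functions and report the
    # bound driver plus any netdevs, mirroring the "Found ..." lines above.
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(cat "$dev/vendor")            # e.g. 0x15b3
        [[ $vendor == 0x15b3 ]] || continue
        device=$(cat "$dev/device")            # e.g. 0x1015 (ConnectX-4 Lx)
        if [[ -e $dev/driver ]]; then
            driver=$(basename "$(readlink "$dev/driver")")
        else
            driver=unbound
        fi
        echo "Found ${dev##*/} ($vendor - $device), driver: $driver"
        for net in "$dev"/net/*; do
            [[ -e $net ]] && echo "  net device: ${net##*/}"
        done
    done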
[[ rdma == tcp ]] 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:40.201 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:40.201 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # 
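rdma_device_init above is mostly kernel plumbing: before any address is assigned, load_ib_rdma_modules probes the InfiniBand/RDMA core stack. The helper modprobes the modules one line at a time, exactly as traced; folded into a loop (the error handling here is an editorial assumption) it reads:

    # Same module set as the trace, in the same order.
    load_ib_rdma_modules() {
        [[ $(uname) == Linux ]] || return 0
        local mod
        for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
            modprobe "$mod" || { echo "modprobe $mod failed" >&2; return 1; }
        done
    }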
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:40.465 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:40.465 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:40.465 altname enp217s0f0np0 00:16:40.465 altname ens818f0np0 00:16:40.465 inet 192.168.100.8/24 scope global mlx_0_0 00:16:40.465 valid_lft forever preferred_lft forever 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
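allocate_nic_ips reads each interface's address back with the three-stage pipeline just traced for mlx_0_0: with 'ip -o' every address prints as one record, awk field 4 is ADDR/PREFIX, and cut drops the prefix length. As a standalone helper:

    # Extract the primary IPv4 address of an interface, as traced above.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    # get_ip_address mlx_0_0   ->   192.168.100.8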
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:40.465 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:40.465 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:40.465 altname enp217s0f1np1 00:16:40.465 altname ens818f1np1 00:16:40.465 inet 192.168.100.9/24 scope global mlx_0_1 00:16:40.465 valid_lft forever preferred_lft forever 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in 
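The harness now re-enumerates the RDMA interfaces to build RDMA_IP_LIST, a newline-separated string, then peels the two target addresses off it with head and tail, as the trace just ahead shows: the first line becomes NVMF_FIRST_TARGET_IP, and 'tail -n +2 | head -n 1' takes exactly one more address even when the list is longer. In isolation:

    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9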
"${net_devs[@]}" 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:40.465 192.168.100.9' 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:40.465 192.168.100.9' 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:16:40.465 10:45:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:40.465 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:40.465 192.168.100.9' 00:16:40.465 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:16:40.465 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:16:40.465 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:40.465 
10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:40.465 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:40.465 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:40.465 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:40.465 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:40.466 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:16:40.466 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:40.466 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:40.466 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3794199 00:16:40.466 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3794199 00:16:40.466 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:40.466 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3794199 ']' 00:16:40.466 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.466 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:40.466 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.466 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:40.466 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 [2024-11-07 10:45:08.091899] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:16:40.466 [2024-11-07 10:45:08.091945] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.753 [2024-11-07 10:45:08.172015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.753 [2024-11-07 10:45:08.211832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.753 [2024-11-07 10:45:08.211873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:40.753 [2024-11-07 10:45:08.211882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.753 [2024-11-07 10:45:08.211891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.753 [2024-11-07 10:45:08.211898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.754 [2024-11-07 10:45:08.213736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.754 [2024-11-07 10:45:08.213824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.754 [2024-11-07 10:45:08.213933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.754 [2024-11-07 10:45:08.213935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:40.754 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:40.754 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:16:40.754 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:40.754 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:40.754 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:40.754 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.754 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:40.754 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.754 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:40.754 [2024-11-07 10:45:08.379217] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x235f0f0/0x23635e0) succeed. 00:16:40.754 [2024-11-07 10:45:08.388311] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2360780/0x23a4c80) succeed. 
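At this point the target side is up: nvmf_tgt was launched with reactor mask 0x1E (cores 1-4), the harness blocked until /var/tmp/spdk.sock answered, and the RDMA transport was created with the sizing from NVMF_TRANSPORT_OPTS, after which both ConnectX ports register as IB devices mlx5_0 and mlx5_1. The same sequence outside the harness, assuming a built SPDK tree at $SPDK (flags copied from the trace; rpc.py is SPDK's stock RPC client, and framework_wait_init stands in for the harness's waitforlisten poll):

    # Start the target on cores 1-4, wait for its RPC socket, then create
    # the RDMA transport with the buffer sizing used above.
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init
    $SPDK/scripts/rpc.py nvmf_create_transport -t rdma \
        --num-shared-buffers 1024 -u 8192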
00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
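The create_subsystems loop above only cats ten heredocs into rpcs.txt, so the RPC text itself never reaches the trace. Judging from the Malloc1..Malloc10 bdevs and the 192.168.100.8:4420 RDMA listener that appear right after, one stanza plausibly looks like the sketch below; treat the malloc sizing and serial numbers as assumptions, since only the bdev names and the listener are confirmed by the surrounding output:

    # Hypothetical reconstruction of the per-subsystem stanzas in rpcs.txt
    # (128 MiB malloc bdevs with 512-byte blocks are a guess).
    gen_subsystem_rpcs() {
        local i
        for i in {1..10}; do
            printf '%s\n' \
                "bdev_malloc_create -b Malloc$i 128 512" \
                "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i" \
                "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i" \
                "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420"
        done
    }
    gen_subsystem_rpcs > rpcs.txt    # replayed in one shot: rpc.py < rpcs.txt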
target/shutdown.sh@36 -- # rpc_cmd 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.021 10:45:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:41.021 Malloc1 00:16:41.021 [2024-11-07 10:45:08.629463] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:41.021 Malloc2 00:16:41.280 Malloc3 00:16:41.280 Malloc4 00:16:41.280 Malloc5 00:16:41.280 Malloc6 00:16:41.280 Malloc7 00:16:41.280 Malloc8 00:16:41.540 Malloc9 00:16:41.540 Malloc10 00:16:41.540 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.540 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:16:41.540 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:41.540 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:41.540 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3794296 00:16:41.540 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3794296 /var/tmp/bdevperf.sock 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3794296 ']' 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:41.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
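With the subsystems created (the Malloc bdevs and the 'NVMe/RDMA Target Listening on 192.168.100.8 port 4420' notice above), the initiator side starts: bdevperf is pointed at its own RPC socket and given queue depth 64, 64 KiB I/Os, a verify workload and a 10 second run, with its bdev configuration arriving as JSON on a process-substitution fd (/dev/fd/63 in the trace). The equivalent invocation:

    # Launch bdevperf as traced; gen_nvmf_target_json is the harness helper
    # whose expansion fills the next stretch of trace.
    $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json {1..10}) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!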
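The gen_nvmf_target_json expansion that fills the following trace is plain string assembly: each loop pass appends one substituted bdev_nvme_attach_controller entry to config[], the array is joined on commas via IFS, and the result is pushed through jq before printf emits the ten-controller document. A trimmed, runnable sketch of that assembly (two entries instead of ten; the real helper also wraps the entries in the enclosing JSON envelope bdevperf expects):

    # Build per-subsystem config fragments, join on commas, pretty-print
    # with jq. Brackets are added here so jq sees a valid JSON array.
    config=()
    for i in 1 2; do
        printf -v entry '{"params": {"name": "Nvme%s", "trtype": "rdma", "traddr": "192.168.100.8", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}' "$i" "$i" "$i"
        config+=("$entry")
    done
    IFS=,
    printf '[%s]\n' "${config[*]}" | jq .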
00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:41.541 { 00:16:41.541 "params": { 00:16:41.541 "name": "Nvme$subsystem", 00:16:41.541 "trtype": "$TEST_TRANSPORT", 00:16:41.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.541 "adrfam": "ipv4", 00:16:41.541 "trsvcid": "$NVMF_PORT", 00:16:41.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.541 "hdgst": ${hdgst:-false}, 00:16:41.541 "ddgst": ${ddgst:-false} 00:16:41.541 }, 00:16:41.541 "method": "bdev_nvme_attach_controller" 00:16:41.541 } 00:16:41.541 EOF 00:16:41.541 )") 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:41.541 { 00:16:41.541 "params": { 00:16:41.541 "name": "Nvme$subsystem", 00:16:41.541 "trtype": "$TEST_TRANSPORT", 00:16:41.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.541 "adrfam": "ipv4", 00:16:41.541 "trsvcid": "$NVMF_PORT", 00:16:41.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.541 "hdgst": ${hdgst:-false}, 00:16:41.541 "ddgst": ${ddgst:-false} 00:16:41.541 }, 00:16:41.541 "method": "bdev_nvme_attach_controller" 00:16:41.541 } 00:16:41.541 EOF 00:16:41.541 )") 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:41.541 { 00:16:41.541 "params": { 00:16:41.541 "name": "Nvme$subsystem", 00:16:41.541 "trtype": "$TEST_TRANSPORT", 00:16:41.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.541 "adrfam": "ipv4", 00:16:41.541 "trsvcid": "$NVMF_PORT", 00:16:41.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.541 "hdgst": ${hdgst:-false}, 00:16:41.541 "ddgst": ${ddgst:-false} 00:16:41.541 }, 00:16:41.541 "method": "bdev_nvme_attach_controller" 00:16:41.541 } 00:16:41.541 EOF 00:16:41.541 )") 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:16:41.541 10:45:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:41.541 { 00:16:41.541 "params": { 00:16:41.541 "name": "Nvme$subsystem", 00:16:41.541 "trtype": "$TEST_TRANSPORT", 00:16:41.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.541 "adrfam": "ipv4", 00:16:41.541 "trsvcid": "$NVMF_PORT", 00:16:41.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.541 "hdgst": ${hdgst:-false}, 00:16:41.541 "ddgst": ${ddgst:-false} 00:16:41.541 }, 00:16:41.541 "method": "bdev_nvme_attach_controller" 00:16:41.541 } 00:16:41.541 EOF 00:16:41.541 )") 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:41.541 { 00:16:41.541 "params": { 00:16:41.541 "name": "Nvme$subsystem", 00:16:41.541 "trtype": "$TEST_TRANSPORT", 00:16:41.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.541 "adrfam": "ipv4", 00:16:41.541 "trsvcid": "$NVMF_PORT", 00:16:41.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.541 "hdgst": ${hdgst:-false}, 00:16:41.541 "ddgst": ${ddgst:-false} 00:16:41.541 }, 00:16:41.541 "method": "bdev_nvme_attach_controller" 00:16:41.541 } 00:16:41.541 EOF 00:16:41.541 )") 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:41.541 { 00:16:41.541 "params": { 00:16:41.541 "name": "Nvme$subsystem", 00:16:41.541 "trtype": "$TEST_TRANSPORT", 00:16:41.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.541 "adrfam": "ipv4", 00:16:41.541 "trsvcid": "$NVMF_PORT", 00:16:41.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.541 "hdgst": ${hdgst:-false}, 00:16:41.541 "ddgst": ${ddgst:-false} 00:16:41.541 }, 00:16:41.541 "method": "bdev_nvme_attach_controller" 00:16:41.541 } 00:16:41.541 EOF 00:16:41.541 )") 00:16:41.541 [2024-11-07 10:45:09.119875] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:16:41.541 [2024-11-07 10:45:09.119928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3794296 ] 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:41.541 { 00:16:41.541 "params": { 00:16:41.541 "name": "Nvme$subsystem", 00:16:41.541 "trtype": "$TEST_TRANSPORT", 00:16:41.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.541 "adrfam": "ipv4", 00:16:41.541 "trsvcid": "$NVMF_PORT", 00:16:41.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.541 "hdgst": ${hdgst:-false}, 00:16:41.541 "ddgst": ${ddgst:-false} 00:16:41.541 }, 00:16:41.541 "method": "bdev_nvme_attach_controller" 00:16:41.541 } 00:16:41.541 EOF 00:16:41.541 )") 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:41.541 { 00:16:41.541 "params": { 00:16:41.541 "name": "Nvme$subsystem", 00:16:41.541 "trtype": "$TEST_TRANSPORT", 00:16:41.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.541 "adrfam": "ipv4", 00:16:41.541 "trsvcid": "$NVMF_PORT", 00:16:41.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.541 "hdgst": ${hdgst:-false}, 00:16:41.541 "ddgst": ${ddgst:-false} 00:16:41.541 }, 00:16:41.541 "method": "bdev_nvme_attach_controller" 00:16:41.541 } 00:16:41.541 EOF 00:16:41.541 )") 00:16:41.541 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:16:41.542 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:41.542 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:41.542 { 00:16:41.542 "params": { 00:16:41.542 "name": "Nvme$subsystem", 00:16:41.542 "trtype": "$TEST_TRANSPORT", 00:16:41.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.542 "adrfam": "ipv4", 00:16:41.542 "trsvcid": "$NVMF_PORT", 00:16:41.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.542 "hdgst": ${hdgst:-false}, 00:16:41.542 "ddgst": ${ddgst:-false} 00:16:41.542 }, 00:16:41.542 "method": "bdev_nvme_attach_controller" 00:16:41.542 } 00:16:41.542 EOF 00:16:41.542 )") 00:16:41.542 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:16:41.542 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:41.542 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:41.542 { 00:16:41.542 "params": { 00:16:41.542 "name": 
"Nvme$subsystem", 00:16:41.542 "trtype": "$TEST_TRANSPORT", 00:16:41.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.542 "adrfam": "ipv4", 00:16:41.542 "trsvcid": "$NVMF_PORT", 00:16:41.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.542 "hdgst": ${hdgst:-false}, 00:16:41.542 "ddgst": ${ddgst:-false} 00:16:41.542 }, 00:16:41.542 "method": "bdev_nvme_attach_controller" 00:16:41.542 } 00:16:41.542 EOF 00:16:41.542 )") 00:16:41.542 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:16:41.542 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:16:41.542 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:16:41.542 10:45:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:16:41.542 "params": { 00:16:41.542 "name": "Nvme1", 00:16:41.542 "trtype": "rdma", 00:16:41.542 "traddr": "192.168.100.8", 00:16:41.542 "adrfam": "ipv4", 00:16:41.542 "trsvcid": "4420", 00:16:41.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:41.542 "hdgst": false, 00:16:41.542 "ddgst": false 00:16:41.542 }, 00:16:41.542 "method": "bdev_nvme_attach_controller" 00:16:41.542 },{ 00:16:41.542 "params": { 00:16:41.542 "name": "Nvme2", 00:16:41.542 "trtype": "rdma", 00:16:41.542 "traddr": "192.168.100.8", 00:16:41.542 "adrfam": "ipv4", 00:16:41.542 "trsvcid": "4420", 00:16:41.542 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:41.542 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:41.542 "hdgst": false, 00:16:41.542 "ddgst": false 00:16:41.542 }, 00:16:41.542 "method": "bdev_nvme_attach_controller" 00:16:41.542 },{ 00:16:41.542 "params": { 00:16:41.542 "name": "Nvme3", 00:16:41.542 "trtype": "rdma", 00:16:41.542 "traddr": "192.168.100.8", 00:16:41.542 "adrfam": "ipv4", 00:16:41.542 "trsvcid": "4420", 00:16:41.542 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:41.542 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:41.542 "hdgst": false, 00:16:41.542 "ddgst": false 00:16:41.542 }, 00:16:41.542 "method": "bdev_nvme_attach_controller" 00:16:41.542 },{ 00:16:41.542 "params": { 00:16:41.542 "name": "Nvme4", 00:16:41.542 "trtype": "rdma", 00:16:41.542 "traddr": "192.168.100.8", 00:16:41.542 "adrfam": "ipv4", 00:16:41.542 "trsvcid": "4420", 00:16:41.542 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:41.542 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:41.542 "hdgst": false, 00:16:41.542 "ddgst": false 00:16:41.542 }, 00:16:41.542 "method": "bdev_nvme_attach_controller" 00:16:41.542 },{ 00:16:41.542 "params": { 00:16:41.542 "name": "Nvme5", 00:16:41.542 "trtype": "rdma", 00:16:41.542 "traddr": "192.168.100.8", 00:16:41.542 "adrfam": "ipv4", 00:16:41.542 "trsvcid": "4420", 00:16:41.542 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:41.542 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:41.542 "hdgst": false, 00:16:41.542 "ddgst": false 00:16:41.542 }, 00:16:41.542 "method": "bdev_nvme_attach_controller" 00:16:41.542 },{ 00:16:41.542 "params": { 00:16:41.542 "name": "Nvme6", 00:16:41.542 "trtype": "rdma", 00:16:41.542 "traddr": "192.168.100.8", 00:16:41.542 "adrfam": "ipv4", 00:16:41.542 "trsvcid": "4420", 00:16:41.542 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:41.542 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:41.542 "hdgst": false, 00:16:41.542 "ddgst": false 00:16:41.542 }, 00:16:41.542 "method": 
"bdev_nvme_attach_controller" 00:16:41.542 },{ 00:16:41.542 "params": { 00:16:41.542 "name": "Nvme7", 00:16:41.542 "trtype": "rdma", 00:16:41.542 "traddr": "192.168.100.8", 00:16:41.542 "adrfam": "ipv4", 00:16:41.542 "trsvcid": "4420", 00:16:41.542 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:41.542 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:41.542 "hdgst": false, 00:16:41.542 "ddgst": false 00:16:41.542 }, 00:16:41.542 "method": "bdev_nvme_attach_controller" 00:16:41.542 },{ 00:16:41.542 "params": { 00:16:41.542 "name": "Nvme8", 00:16:41.542 "trtype": "rdma", 00:16:41.542 "traddr": "192.168.100.8", 00:16:41.542 "adrfam": "ipv4", 00:16:41.542 "trsvcid": "4420", 00:16:41.542 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:41.542 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:41.542 "hdgst": false, 00:16:41.542 "ddgst": false 00:16:41.542 }, 00:16:41.542 "method": "bdev_nvme_attach_controller" 00:16:41.542 },{ 00:16:41.542 "params": { 00:16:41.542 "name": "Nvme9", 00:16:41.542 "trtype": "rdma", 00:16:41.542 "traddr": "192.168.100.8", 00:16:41.542 "adrfam": "ipv4", 00:16:41.542 "trsvcid": "4420", 00:16:41.542 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:41.542 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:41.542 "hdgst": false, 00:16:41.542 "ddgst": false 00:16:41.542 }, 00:16:41.542 "method": "bdev_nvme_attach_controller" 00:16:41.542 },{ 00:16:41.542 "params": { 00:16:41.542 "name": "Nvme10", 00:16:41.542 "trtype": "rdma", 00:16:41.542 "traddr": "192.168.100.8", 00:16:41.542 "adrfam": "ipv4", 00:16:41.542 "trsvcid": "4420", 00:16:41.542 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:41.542 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:41.542 "hdgst": false, 00:16:41.542 "ddgst": false 00:16:41.542 }, 00:16:41.542 "method": "bdev_nvme_attach_controller" 00:16:41.542 }' 00:16:41.542 [2024-11-07 10:45:09.199627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.801 [2024-11-07 10:45:09.239680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.737 Running I/O for 10 seconds... 
00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:16:42.737 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:16:42.996 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:16:42.996 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:16:42.996 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:42.996 10:45:10 
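The loop executing here is waitforio: up to ten samples of Nvme1n1's read counter, 0.25 s apart, succeeding once at least 100 reads have completed (the first sample above returned 3, so the harness sleeps and tries again). Reconstructed from the trace, with rpc.py standing in for the harness's rpc_cmd wrapper:

    # Poll bdev_get_iostat until the bdev has served at least 100 reads.
    waitforio() {
        local rpc_sock=$1 bdev=$2 i count ret=1
        for ((i = 10; i != 0; i--)); do
            count=$("$SPDK"/scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                    jq -r '.bdevs[0].num_read_ops')
            if [ "$count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }
    # waitforio /var/tmp/bdevperf.sock Nvme1n1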
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:16:42.996 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.996 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:43.255 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.255 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=147 00:16:43.255 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 147 -ge 100 ']' 00:16:43.255 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:16:43.255 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:16:43.255 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:16:43.255 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3794199 00:16:43.255 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3794199 ']' 00:16:43.255 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3794199 00:16:43.255 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:16:43.255 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:43.255 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3794199 00:16:43.256 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:43.256 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:43.256 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3794199' 00:16:43.256 killing process with pid 3794199 00:16:43.256 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 3794199 00:16:43.256 10:45:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 3794199 00:16:43.773 2576.00 IOPS, 161.00 MiB/s [2024-11-07T09:45:11.444Z] 10:45:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:16:44.372 [2024-11-07 10:45:11.938215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100119fd80 len:0x10000 key:0x180800 00:16:44.372 [2024-11-07 10:45:11.938296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.372 [2024-11-07 10:45:11.938351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100118fd00 len:0x10000 key:0x180800 00:16:44.372 [2024-11-07 10:45:11.938386] 
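This is the step that gives nvmf_shutdown_tc3 its name: the second sample came back with 147 reads, so with bdevperf still pushing I/O (2576 IOPS in the interval line above) killprocess takes the nvmf target down while traffic is in flight. The helper, reconstructed from the kill -0 / ps / kill / wait sequence in the trace:

    # killprocess as traced: confirm the pid is alive and names one of the
    # target's reactor threads, then signal it (plain kill, SIGTERM) and reap.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 0     # already gone
        echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
        kill "$pid"
        wait "$pid" || true                        # nonzero exit is expected here
    }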
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0
00:16:44.372 [2024-11-07 10:45:11.938424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100117fc80 len:0x10000 key:0x180800
00:16:44.372 [2024-11-07 10:45:11.938456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0
00:16:44.372 [log excerpt: the command/completion notice pair above repeats for every remaining queued I/O on sqid:1 (WRITEs lba 36864-40832 and READs lba 32768-36992, all len:128 via keyed SGL data blocks) in successive abort bursts between 10:45:11.938 and 10:45:11.956; every queued command completes with ABORTED - SQ DELETION (00/08), the generic NVMe status code 0x08, command aborted due to SQ deletion]
00:16:44.377 [2024-11-07 10:45:11.956715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a5f380 len:0x10000 key:0x181b00 00:16:44.377
[2024-11-07 10:45:11.956732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.377 [2024-11-07 10:45:11.956749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a4f300 len:0x10000 key:0x181b00 00:16:44.377 [2024-11-07 10:45:11.956761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.377 [2024-11-07 10:45:11.956774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a3f280 len:0x10000 key:0x181b00 00:16:44.377 [2024-11-07 10:45:11.956786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.377 [2024-11-07 10:45:11.956800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a2f200 len:0x10000 key:0x181b00 00:16:44.377 [2024-11-07 10:45:11.956813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.377 [2024-11-07 10:45:11.956827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a1f180 len:0x10000 key:0x181b00 00:16:44.377 [2024-11-07 10:45:11.956838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.377 [2024-11-07 10:45:11.956852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001a0f100 len:0x10000 key:0x181b00 00:16:44.377 [2024-11-07 10:45:11.956863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.377 [2024-11-07 10:45:11.956876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001df0000 len:0x10000 key:0x181d00 00:16:44.377 [2024-11-07 10:45:11.956887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.377 [2024-11-07 10:45:11.956901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001ddff80 len:0x10000 key:0x181d00 00:16:44.377 [2024-11-07 10:45:11.956912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.377 [2024-11-07 10:45:11.956925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001dcff00 len:0x10000 key:0x181d00 00:16:44.377 [2024-11-07 10:45:11.956937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.377 [2024-11-07 10:45:11.956950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001dbfe80 len:0x10000 key:0x181d00 00:16:44.377 [2024-11-07 10:45:11.956961] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.377 [2024-11-07 10:45:11.956974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001dafe00 len:0x10000 key:0x181d00 00:16:44.377 [2024-11-07 10:45:11.956985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.377 [2024-11-07 10:45:11.956998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d9fd80 len:0x10000 key:0x181d00 00:16:44.377 [2024-11-07 10:45:11.957010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.377 [2024-11-07 10:45:11.957023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d8fd00 len:0x10000 key:0x181d00 00:16:44.377 [2024-11-07 10:45:11.957034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.377 [2024-11-07 10:45:11.957047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d7fc80 len:0x10000 key:0x181d00 00:16:44.377 [2024-11-07 10:45:11.957058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d6fc00 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d5fb80 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d4fb00 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d3fa80 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d2fa00 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d1f980 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001d0f900 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001cff880 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001cef800 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001cdf780 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001ccf700 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001cbf680 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001caf600 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c9f580 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957403] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c8f500 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c7f480 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c6f400 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c5f380 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c4f300 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c3f280 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c2f200 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c1f180 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001c0f100 len:0x10000 key:0x181d00 00:16:44.378 [2024-11-07 10:45:11.957626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001ff0000 len:0x10000 key:0x183b00 00:16:44.378 [2024-11-07 10:45:11.957653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001fdff80 len:0x10000 key:0x183b00 00:16:44.378 [2024-11-07 10:45:11.957677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001fcff00 len:0x10000 key:0x183b00 00:16:44.378 [2024-11-07 10:45:11.957701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001fbfe80 len:0x10000 key:0x183b00 00:16:44.378 [2024-11-07 10:45:11.957725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001fafe00 len:0x10000 key:0x183b00 00:16:44.378 [2024-11-07 10:45:11.957749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f9fd80 len:0x10000 key:0x183b00 00:16:44.378 [2024-11-07 10:45:11.957774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f8fd00 len:0x10000 key:0x183b00 00:16:44.378 [2024-11-07 10:45:11.957798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f7fc80 len:0x10000 key:0x183b00 00:16:44.378 [2024-11-07 10:45:11.957822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f6fc00 len:0x10000 key:0x183b00 00:16:44.378 [2024-11-07 10:45:11.957846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 
sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f5fb80 len:0x10000 key:0x183b00 00:16:44.378 [2024-11-07 10:45:11.957870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001b6fc00 len:0x10000 key:0x181b00 00:16:44.378 [2024-11-07 10:45:11.957895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e920000 len:0x10000 key:0x183900 00:16:44.378 [2024-11-07 10:45:11.957921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e941000 len:0x10000 key:0x183900 00:16:44.378 [2024-11-07 10:45:11.957945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.378 [2024-11-07 10:45:11.957958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e962000 len:0x10000 key:0x183900 00:16:44.379 [2024-11-07 10:45:11.957969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.957982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e983000 len:0x10000 key:0x183900 00:16:44.379 [2024-11-07 10:45:11.957993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.958006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e9a4000 len:0x10000 key:0x183900 00:16:44.379 [2024-11-07 10:45:11.958018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.958031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e9c5000 len:0x10000 key:0x183900 00:16:44.379 [2024-11-07 10:45:11.958042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.958055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e9e6000 len:0x10000 key:0x183900 00:16:44.379 [2024-11-07 10:45:11.958066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 
[2024-11-07 10:45:11.958079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ea07000 len:0x10000 key:0x183900 00:16:44.379 [2024-11-07 10:45:11.958090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.958103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ea28000 len:0x10000 key:0x183900 00:16:44.379 [2024-11-07 10:45:11.958114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.958127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ea49000 len:0x10000 key:0x183900 00:16:44.379 [2024-11-07 10:45:11.958139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.958152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ea6a000 len:0x10000 key:0x183900 00:16:44.379 [2024-11-07 10:45:11.958163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.958176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ea8b000 len:0x10000 key:0x183900 00:16:44.379 [2024-11-07 10:45:11.958189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.958202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eaac000 len:0x10000 key:0x183900 00:16:44.379 [2024-11-07 10:45:11.958213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.958226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eacd000 len:0x10000 key:0x183900 00:16:44.379 [2024-11-07 10:45:11.958237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.958250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eaee000 len:0x10000 key:0x183900 00:16:44.379 [2024-11-07 10:45:11.958261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.958275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eb0f000 len:0x10000 key:0x183900 00:16:44.379 [2024-11-07 10:45:11.958286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.960807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100213fa80 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.960823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.960840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100212fa00 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.960851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.960864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100211f980 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.960876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.960889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100210f900 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.960900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.960913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010020ff880 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.960924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.960938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010020ef800 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.960949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.960961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010020df780 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.960975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.960988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010020cf700 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.960999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.961012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010020bf680 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.961023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.961036] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010020af600 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.961046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.961060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100209f580 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.961070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.961083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100208f500 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.961094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.961107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100207f480 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.961118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.961131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100206f400 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.961142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.961155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100205f380 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.961166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.961179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100204f300 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.961190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.961203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100203f280 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.961214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.961227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100202f200 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.961239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.961253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 
nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100201f180 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.961264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.961277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100200f100 len:0x10000 key:0x184300 00:16:44.379 [2024-11-07 10:45:11.961288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.961301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010023f0000 len:0x10000 key:0x183e00 00:16:44.379 [2024-11-07 10:45:11.961312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.961325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010023dff80 len:0x10000 key:0x183e00 00:16:44.379 [2024-11-07 10:45:11.961336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.379 [2024-11-07 10:45:11.961349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010023cff00 len:0x10000 key:0x183e00 00:16:44.379 [2024-11-07 10:45:11.961360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010023bfe80 len:0x10000 key:0x183e00 00:16:44.380 [2024-11-07 10:45:11.961384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010023afe00 len:0x10000 key:0x183e00 00:16:44.380 [2024-11-07 10:45:11.961408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100239fd80 len:0x10000 key:0x183e00 00:16:44.380 [2024-11-07 10:45:11.961433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100238fd00 len:0x10000 key:0x183e00 00:16:44.380 [2024-11-07 10:45:11.961457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20100237fc80 len:0x10000 key:0x183e00 00:16:44.380 [2024-11-07 10:45:11.961481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100236fc00 len:0x10000 key:0x183e00 00:16:44.380 [2024-11-07 10:45:11.961505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100235fb80 len:0x10000 key:0x183e00 00:16:44.380 [2024-11-07 10:45:11.961538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100234fb00 len:0x10000 key:0x183e00 00:16:44.380 [2024-11-07 10:45:11.961562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100233fa80 len:0x10000 key:0x183e00 00:16:44.380 [2024-11-07 10:45:11.961587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201001f4fb00 len:0x10000 key:0x183b00 00:16:44.380 [2024-11-07 10:45:11.961625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fbd1000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.961651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e09e000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.961678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e07d000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.961705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e05c000 
len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.961732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e03b000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.961759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e01a000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.961786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dff9000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.961813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfd8000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.961842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfb7000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.961869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df96000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.961896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df75000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.961923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df54000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.961950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df33000 len:0x10000 key:0x183900 00:16:44.380 
[2024-11-07 10:45:11.961977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.961991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df12000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.962004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.962019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000def1000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.962031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.962046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ded0000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.962058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.962073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f97f000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.962085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.962100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f95e000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.962112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.962127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f93d000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.962141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.962156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f91c000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.962168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.962183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8fb000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.962195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.962210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8da000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.962222] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.962237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8b9000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.962249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.962264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f898000 len:0x10000 key:0x183900 00:16:44.380 [2024-11-07 10:45:11.962276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.380 [2024-11-07 10:45:11.962291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f877000 len:0x10000 key:0x183900 00:16:44.381 [2024-11-07 10:45:11.962304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.381 [2024-11-07 10:45:11.962319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f856000 len:0x10000 key:0x183900 00:16:44.381 [2024-11-07 10:45:11.962331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.381 [2024-11-07 10:45:11.962346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f835000 len:0x10000 key:0x183900 00:16:44.381 [2024-11-07 10:45:11.962358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.381 [2024-11-07 10:45:11.962373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f814000 len:0x10000 key:0x183900 00:16:44.381 [2024-11-07 10:45:11.962385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.381 [2024-11-07 10:45:11.962400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f7f3000 len:0x10000 key:0x183900 00:16:44.381 [2024-11-07 10:45:11.962412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.381 [2024-11-07 10:45:11.962427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f7d2000 len:0x10000 key:0x183900 00:16:44.381 [2024-11-07 10:45:11.962439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.381 [2024-11-07 10:45:11.962455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f7b1000 len:0x10000 key:0x183900 00:16:44.381 [2024-11-07 10:45:11.962468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0
00:16:44.381 [2024-11-07 10:45:11.964626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100250f900 len:0x10000 key:0x184100
00:16:44.381 [2024-11-07 10:45:11.964645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command/spdk_nvme_print_completion *NOTICE* pair repeats for every remaining outstanding I/O on qid:1 (WRITE lba:36992-40832 and READ lba:32768-36608, len:128 each); every completion is ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 ...]
00:16:44.383 [2024-11-07 10:45:11.966361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fb8f000 len:0x10000 key:0x183900
00:16:44.383 [2024-11-07 10:45:11.966374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0
00:16:44.383 [2024-11-07 10:45:11.968632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100290f900 len:0x10000 key:0x182d00
00:16:44.383 [2024-11-07 10:45:11.968650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0
[... the same pair repeats for WRITE lba:36736-40832 and READ lba:32768-36352, len:128 each, all completions ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 ...]
00:16:44.384 [2024-11-07 10:45:11.970353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9c1000 len:0x10000 key:0x183900
00:16:44.384 [2024-11-07 10:45:11.970366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0
00:16:44.384 [2024-11-07 10:45:11.972734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dcff00 len:0x10000 key:0x183f00
00:16:44.384 [2024-11-07 10:45:11.972780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0
[... the same pair repeats for WRITE lba:34944-40832 and READ lba:32768-33280, len:128 each, all completions ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 ...]
00:16:44.386 [2024-11-07 10:45:11.974554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f415000 len:0x10000 key:0x183900
00:16:44.386
[2024-11-07 10:45:11.974567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.974581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f436000 len:0x10000 key:0x183900 00:16:44.386 [2024-11-07 10:45:11.974594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.974608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f457000 len:0x10000 key:0x183900 00:16:44.386 [2024-11-07 10:45:11.974621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.974635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f478000 len:0x10000 key:0x183900 00:16:44.386 [2024-11-07 10:45:11.974647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.974662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f499000 len:0x10000 key:0x183900 00:16:44.386 [2024-11-07 10:45:11.974675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.974690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4ba000 len:0x10000 key:0x183900 00:16:44.386 [2024-11-07 10:45:11.974702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.974716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4db000 len:0x10000 key:0x183900 00:16:44.386 [2024-11-07 10:45:11.974729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.974743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4fc000 len:0x10000 key:0x183900 00:16:44.386 [2024-11-07 10:45:11.974756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.974770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f51d000 len:0x10000 key:0x183900 00:16:44.386 [2024-11-07 10:45:11.974783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.974799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f53e000 len:0x10000 key:0x183900 00:16:44.386 [2024-11-07 10:45:11.974812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.974826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f55f000 len:0x10000 key:0x183900 00:16:44.386 [2024-11-07 10:45:11.974839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.977379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ebf680 len:0x10000 key:0x184200 00:16:44.386 [2024-11-07 10:45:11.977423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.977465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eaf600 len:0x10000 key:0x184200 00:16:44.386 [2024-11-07 10:45:11.977496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.977545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e9f580 len:0x10000 key:0x184200 00:16:44.386 [2024-11-07 10:45:11.977575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.977611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e8f500 len:0x10000 key:0x184200 00:16:44.386 [2024-11-07 10:45:11.977642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.977677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e7f480 len:0x10000 key:0x184200 00:16:44.386 [2024-11-07 10:45:11.977708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.977744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e6f400 len:0x10000 key:0x184200 00:16:44.386 [2024-11-07 10:45:11.977775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.977811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e5f380 len:0x10000 key:0x184200 00:16:44.386 [2024-11-07 10:45:11.977842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.977878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e4f300 len:0x10000 key:0x184200 00:16:44.386 [2024-11-07 10:45:11.977908] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.977945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e3f280 len:0x10000 key:0x184200 00:16:44.386 [2024-11-07 10:45:11.977976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.978024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e2f200 len:0x10000 key:0x184200 00:16:44.386 [2024-11-07 10:45:11.978055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.386 [2024-11-07 10:45:11.978102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e1f180 len:0x10000 key:0x184200 00:16:44.387 [2024-11-07 10:45:11.978115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e0f100 len:0x10000 key:0x184200 00:16:44.387 [2024-11-07 10:45:11.978142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031f0000 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031dff80 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031cff00 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031bfe80 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031afe00 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100319fd80 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100318fd00 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100317fc80 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100316fc00 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100315fb80 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100314fb00 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100313fa80 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100312fa00 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100311f980 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 
p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100310f900 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ff880 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ef800 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030df780 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030cf700 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030bf680 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030af600 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100309f580 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100308f500 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 
[2024-11-07 10:45:11.978788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100307f480 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100306f400 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100305f380 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100304f300 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100303f280 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100302f200 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100301f180 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.978978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100300f100 len:0x10000 key:0x184b00 00:16:44.387 [2024-11-07 10:45:11.978990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.979006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033f0000 len:0x10000 key:0x184800 00:16:44.387 [2024-11-07 10:45:11.979019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.979033] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033dff80 len:0x10000 key:0x184800 00:16:44.387 [2024-11-07 10:45:11.979046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.979061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033cff00 len:0x10000 key:0x184800 00:16:44.387 [2024-11-07 10:45:11.979073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.387 [2024-11-07 10:45:11.979088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033bfe80 len:0x10000 key:0x184800 00:16:44.387 [2024-11-07 10:45:11.979100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.979115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033afe00 len:0x10000 key:0x184800 00:16:44.388 [2024-11-07 10:45:11.979127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.979142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100339fd80 len:0x10000 key:0x184800 00:16:44.388 [2024-11-07 10:45:11.979155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.979169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100338fd00 len:0x10000 key:0x184800 00:16:44.388 [2024-11-07 10:45:11.979182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.979197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100337fc80 len:0x10000 key:0x184800 00:16:44.388 [2024-11-07 10:45:11.979209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.979224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100336fc00 len:0x10000 key:0x184800 00:16:44.388 [2024-11-07 10:45:11.979236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.979251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100335fb80 len:0x10000 key:0x184800 00:16:44.388 [2024-11-07 10:45:11.979264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.979278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100334fb00 len:0x10000 key:0x184800 00:16:44.388 [2024-11-07 10:45:11.979290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.979307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100333fa80 len:0x10000 key:0x184800 00:16:44.388 [2024-11-07 10:45:11.979319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.979334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100332fa00 len:0x10000 key:0x184800 00:16:44.388 [2024-11-07 10:45:11.979346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.979361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100331f980 len:0x10000 key:0x184800 00:16:44.388 [2024-11-07 10:45:11.979373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.979388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100330f900 len:0x10000 key:0x184800 00:16:44.388 [2024-11-07 10:45:11.979401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.979415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ff880 len:0x10000 key:0x184800 00:16:44.388 [2024-11-07 10:45:11.979428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.979442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ef800 len:0x10000 key:0x184800 00:16:44.388 [2024-11-07 10:45:11.979455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.979469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032df780 len:0x10000 key:0x184800 00:16:44.388 [2024-11-07 10:45:11.979482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.979496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032cf700 len:0x10000 key:0x184800 00:16:44.388 [2024-11-07 10:45:11.979531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.979546] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032bf680 len:0x10000 key:0x184800 00:16:44.388 [2024-11-07 10:45:11.979558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.979573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ecf700 len:0x10000 key:0x184200 00:16:44.388 [2024-11-07 10:45:11.979585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.982143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.388 [2024-11-07 10:45:11.982164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:1cac280 sqhd:9a00 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.982177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.388 [2024-11-07 10:45:11.982191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:1cac280 sqhd:9a00 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.982204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.388 [2024-11-07 10:45:11.982217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:1cac280 sqhd:9a00 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.982229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.388 [2024-11-07 10:45:11.982241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:1cac280 sqhd:9a00 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.984564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:44.388 [2024-11-07 10:45:11.984607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:16:44.388 [2024-11-07 10:45:11.984640] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:16:44.388 [2024-11-07 10:45:11.984689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.388 [2024-11-07 10:45:11.984721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.984753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.388 [2024-11-07 10:45:11.984783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.984815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.388 [2024-11-07 10:45:11.984845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.984876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.388 [2024-11-07 10:45:11.984907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.986998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:44.388 [2024-11-07 10:45:11.987038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:16:44.388 [2024-11-07 10:45:11.987069] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:16:44.388 [2024-11-07 10:45:11.987115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.388 [2024-11-07 10:45:11.987147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.987180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.388 [2024-11-07 10:45:11.987209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.987241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.388 [2024-11-07 10:45:11.987272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.987310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.388 [2024-11-07 10:45:11.987341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.989622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:44.388 [2024-11-07 10:45:11.989663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:16:44.388 [2024-11-07 10:45:11.989692] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:16:44.388 [2024-11-07 10:45:11.989737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.388 [2024-11-07 10:45:11.989769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.989801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.388 [2024-11-07 10:45:11.989830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.989862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.388 [2024-11-07 10:45:11.989893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.989924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.388 [2024-11-07 10:45:11.989954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.388 [2024-11-07 10:45:11.992226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:44.388 [2024-11-07 10:45:11.992243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:16:44.389 [2024-11-07 10:45:11.992255] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:16:44.389 [2024-11-07 10:45:11.992273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:11.992285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:11.992298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:11.992311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:11.992323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:11.992336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:11.992348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:11.992360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:11.994555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:44.389 [2024-11-07 10:45:11.994603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:16:44.389 [2024-11-07 10:45:11.994633] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:16:44.389 [2024-11-07 10:45:11.994681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:11.994713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:11.994744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:11.994774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:11.994806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:11.994836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:11.994868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:11.994898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:11.997054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:44.389 [2024-11-07 10:45:11.997094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:16:44.389 [2024-11-07 10:45:11.997123] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
00:16:44.389 [2024-11-07 10:45:11.997165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:11.997197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:11.997229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:11.997259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:11.997291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:11.997321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:11.997366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:11.997378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:11.999265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:44.389 [2024-11-07 10:45:11.999307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:16:44.389 [2024-11-07 10:45:11.999336] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:16:44.389 [2024-11-07 10:45:11.999382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:11.999426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:11.999443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:11.999455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:11.999468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:11.999480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:11.999493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:11.999505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:12.001716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:44.389 [2024-11-07 10:45:12.001757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:16:44.389 [2024-11-07 10:45:12.001787] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 
00:16:44.389 [2024-11-07 10:45:12.001834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:12.001866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:12.001899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:12.001929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:12.001961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:12.001991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:12.002022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.389 [2024-11-07 10:45:12.002052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0 00:16:44.389 [2024-11-07 10:45:12.004019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:44.389 [2024-11-07 10:45:12.004059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:16:44.389 [2024-11-07 10:45:12.004089] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
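The parenthesized status that SPDK prints on each completion above, e.g. "ABORTED - SQ DELETION (00/08)", is the NVMe (Status Code Type/Status Code) pair in hex: SCT 0x0 is Generic Command Status and SC 0x08 is Command Aborted due to SQ Deletion, the expected fate of in-flight commands while the test deletes submission queues during failover. The "-6" in the CQ transport errors is -ENXIO (No such device or address). A minimal Python sketch of this decoding, covering only the codes that appear in this log (the helper and table names are illustrative, not SPDK APIs):

import re

# Generic Command Status (SCT 0x0) codes seen in this log; names follow
# the NVMe base specification. Illustrative table, not an SPDK API.
GENERIC_STATUS = {
    0x00: "SUCCESSFUL COMPLETION",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(sct: int, sc: int) -> str:
    # SPDK prints completions as "(SCT/SC)" in hex, e.g. "(00/08)".
    name = GENERIC_STATUS.get(sc, "unknown") if sct == 0x0 else "unknown"
    return f"sct=0x{sct:x} sc=0x{sc:x} -> {name}"

line = ("nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: "
        "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:27012000 sqhd:7250 p:0 m:0 dnr:0")
m = re.search(r"\(([0-9a-f]{2})/([0-9a-f]{2})\)", line)
if m:
    # Prints: sct=0x0 sc=0x8 -> ABORTED - SQ DELETION
    print(decode_status(int(m.group(1), 16), int(m.group(2), 16)))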
00:16:44.389 [2024-11-07 10:45:12.004130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.389 [2024-11-07 10:45:12.004163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0
00:16:44.389 [2024-11-07 10:45:12.004194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.389 [2024-11-07 10:45:12.004224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0
00:16:44.389 [2024-11-07 10:45:12.004255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.389 [2024-11-07 10:45:12.004292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0
00:16:44.389 [2024-11-07 10:45:12.004324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.389 [2024-11-07 10:45:12.004354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32568 cdw0:1 sqhd:7990 p:0 m:0 dnr:0
00:16:44.389 [2024-11-07 10:45:12.023213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:44.389 [2024-11-07 10:45:12.023265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:16:44.389 [2024-11-07 10:45:12.023298] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:16:44.649 [2024-11-07 10:45:12.031891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:16:44.649 [2024-11-07 10:45:12.031918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:16:44.649 [2024-11-07 10:45:12.031929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:16:44.649 [2024-11-07 10:45:12.031974] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:16:44.649 [2024-11-07 10:45:12.031988] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:16:44.649 [2024-11-07 10:45:12.032001] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:16:44.649 [2024-11-07 10:45:12.032014] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:16:44.649 [2024-11-07 10:45:12.032026] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:16:44.649 [2024-11-07 10:45:12.032041] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:16:44.649 [2024-11-07 10:45:12.032054] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:16:44.649 [2024-11-07 10:45:12.032140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:16:44.649 [2024-11-07 10:45:12.032153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:16:44.650 [2024-11-07 10:45:12.032163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:16:44.650 [2024-11-07 10:45:12.032176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:16:44.650 [2024-11-07 10:45:12.034325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:16:44.650 task offset: 38528 on job bdev=Nvme1n1 fails
00:16:44.650
00:16:44.650                                                                 Latency(us)
00:16:44.650 [2024-11-07T09:45:12.321Z] Device Information : runtime(s)    IOPS   MiB/s   Fail/s    TO/s    Average        min        max
00:16:44.650 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:44.650 Job: Nvme1n1 ended in about 1.92 seconds with error
00:16:44.650 Verification LBA range: start 0x0 length 0x400
00:16:44.650 Nvme1n1     :     1.92    141.69     8.86    33.34     0.00  363345.78    5740.95  1080452.71
00:16:44.650 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:44.650 Job: Nvme2n1 ended in about 1.92 seconds with error
00:16:44.650 Verification LBA range: start 0x0 length 0x400
00:16:44.650 Nvme2n1     :     1.92    133.30     8.33    33.32     0.00  378538.07    7654.60  1187826.89
00:16:44.650 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:44.650 Job: Nvme3n1 ended in about 1.92 seconds with error
00:16:44.650 Verification LBA range: start 0x0 length 0x400
00:16:44.650 Nvme3n1     :     1.92    133.24     8.33    33.31     0.00  375756.88   15414.07  1181116.01
00:16:44.650 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:44.650 Job: Nvme4n1 ended in about 1.92 seconds with error
00:16:44.650 Verification LBA range: start 0x0 length 0x400
00:16:44.650 Nvme4n1     :     1.92    133.18     8.32    33.30     0.00  372927.04   24012.39  1174405.12
00:16:44.650 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:44.650 Job: Nvme5n1 ended in about 1.92 seconds with error
00:16:44.650 Verification LBA range: start 0x0 length 0x400
00:16:44.650 Nvme5n1     :     1.92    133.12     8.32    33.28     0.00  369974.31   32086.43  1167694.23
00:16:44.650 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:44.650 Job: Nvme6n1 ended in about 1.92 seconds with error
00:16:44.650 Verification LBA range: start 0x0 length 0x400
00:16:44.650 Nvme6n1     :     1.92    133.06     8.32    33.27     0.00  367136.28   37539.02  1154272.46
00:16:44.650 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:44.650 Job: Nvme7n1 ended in about 1.92 seconds with error
00:16:44.650 Verification LBA range: start 0x0 length 0x400
00:16:44.650 Nvme7n1     :     1.92    133.00     8.31    33.25     0.00  364351.98   44879.05  1147561.57
00:16:44.650 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:44.650 Job: Nvme8n1 ended in about 1.93 seconds with error
00:16:44.650 Verification LBA range: start 0x0 length 0x400
00:16:44.650 Nvme8n1     :     1.93    132.94     8.31    33.24     0.00  361610.94   52848.23  1140850.69
00:16:44.650 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:44.650 Job: Nvme9n1 ended in about 1.93 seconds with error
00:16:44.650 Verification LBA range: start 0x0 length 0x400
00:16:44.650 Nvme9n1     :     1.93    132.88     8.31    33.22     0.00  358563.84   42362.47  1134139.80
00:16:44.650 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:44.650 Job: Nvme10n1 ended in about 1.93 seconds with error
00:16:44.650 Verification LBA range: start 0x0 length 0x400
00:16:44.650 Nvme10n1    :     1.93    132.83     8.30    33.21     0.00  355501.34   67528.29  1120718.03
00:16:44.650 [2024-11-07T09:45:12.321Z] ===================================================================================================================
00:16:44.650 [2024-11-07T09:45:12.321Z] Total       :           1339.26    83.70   332.73     0.00  366753.61    5740.95  1187826.89
00:16:44.650 [2024-11-07 10:45:12.059694] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:44.650 [2024-11-07 10:45:12.059720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:16:44.650 [2024-11-07 10:45:12.059737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:16:44.650 [2024-11-07 10:45:12.071548] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:16:44.650 [2024-11-07 10:45:12.071609] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:16:44.650 [2024-11-07 10:45:12.071637] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040
00:16:44.650 [2024-11-07 10:45:12.071752] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:16:44.650 [2024-11-07 10:45:12.071787] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:16:44.650 [2024-11-07 10:45:12.071812] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168e5300
00:16:44.650 [2024-11-07 10:45:12.071963] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:16:44.650 [2024-11-07 10:45:12.071997] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:16:44.650 [2024-11-07 10:45:12.072030] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168d9c80
00:16:44.650 [2024-11-07 10:45:12.076517] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:16:44.650 [2024-11-07 10:45:12.076542] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:16:44.650 [2024-11-07 10:45:12.076553] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168a8500
00:16:44.650 [2024-11-07 10:45:12.076634] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:16:44.650 [2024-11-07 10:45:12.076648] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:16:44.650 [2024-11-07 10:45:12.076658] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168c5040
00:16:44.650
[2024-11-07 10:45:12.076743] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:44.650 [2024-11-07 10:45:12.076757] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:44.650 [2024-11-07 10:45:12.076766] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168c6340 00:16:44.650 [2024-11-07 10:45:12.076852] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:44.650 [2024-11-07 10:45:12.076866] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:44.650 [2024-11-07 10:45:12.076876] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168d2900 00:16:44.650 [2024-11-07 10:45:12.077516] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:44.650 [2024-11-07 10:45:12.077533] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:44.650 [2024-11-07 10:45:12.077543] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168bf1c0 00:16:44.650 [2024-11-07 10:45:12.077625] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:44.650 [2024-11-07 10:45:12.077638] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:44.650 [2024-11-07 10:45:12.077648] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001688e080 00:16:44.650 [2024-11-07 10:45:12.077729] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:44.650 [2024-11-07 10:45:12.077743] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:44.650 [2024-11-07 10:45:12.077752] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001689b1c0 00:16:44.650 10:45:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3794296 00:16:44.650 10:45:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:16:44.650 10:45:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3794296 00:16:44.650 10:45:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:16:44.650 10:45:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:44.650 10:45:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:16:44.650 10:45:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:44.650 10:45:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3794296 00:16:45.586 [2024-11-07 10:45:13.076130] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.586 [2024-11-07 10:45:13.076190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:16:45.586 [2024-11-07 10:45:13.077913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.586 [2024-11-07 10:45:13.077956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:16:45.586 [2024-11-07 10:45:13.079163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.586 [2024-11-07 10:45:13.079205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:16:45.586 [2024-11-07 10:45:13.080860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.586 [2024-11-07 10:45:13.080902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:16:45.586 [2024-11-07 10:45:13.082219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.586 [2024-11-07 10:45:13.082258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:16:45.586 [2024-11-07 10:45:13.083766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.586 [2024-11-07 10:45:13.083806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:16:45.586 [2024-11-07 10:45:13.083834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:16:45.586 [2024-11-07 10:45:13.083871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:16:45.586 [2024-11-07 10:45:13.083884] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:16:45.586 [2024-11-07 10:45:13.083898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:16:45.586 [2024-11-07 10:45:13.083914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:16:45.586 [2024-11-07 10:45:13.083925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:16:45.586 [2024-11-07 10:45:13.083936] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state 00:16:45.587 [2024-11-07 10:45:13.083948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:16:45.587 [2024-11-07 10:45:13.083961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:16:45.587 [2024-11-07 10:45:13.083972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:16:45.587 [2024-11-07 10:45:13.083982] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state 00:16:45.587 [2024-11-07 10:45:13.083993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:16:45.587 [2024-11-07 10:45:13.085530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.587 [2024-11-07 10:45:13.085572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:16:45.587 [2024-11-07 10:45:13.086844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.587 [2024-11-07 10:45:13.086899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:16:45.587 [2024-11-07 10:45:13.088367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.587 [2024-11-07 10:45:13.088407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:16:45.587 [2024-11-07 10:45:13.089912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.587 [2024-11-07 10:45:13.089955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:16:45.587 [2024-11-07 10:45:13.090114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:16:45.587 [2024-11-07 10:45:13.090148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:16:45.587 [2024-11-07 10:45:13.090178] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state 00:16:45.587 [2024-11-07 10:45:13.090208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:16:45.587 [2024-11-07 10:45:13.090245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:16:45.587 [2024-11-07 10:45:13.090274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:16:45.587 [2024-11-07 10:45:13.090302] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state 00:16:45.587 [2024-11-07 10:45:13.090331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:16:45.587 [2024-11-07 10:45:13.090367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:16:45.587 [2024-11-07 10:45:13.090395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:16:45.587 [2024-11-07 10:45:13.090424] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state 00:16:45.587 [2024-11-07 10:45:13.090454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:16:45.587 [2024-11-07 10:45:13.090489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:16:45.587 [2024-11-07 10:45:13.090582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:16:45.587 [2024-11-07 10:45:13.090611] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state 00:16:45.587 [2024-11-07 10:45:13.090641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:16:45.587 [2024-11-07 10:45:13.090679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:16:45.587 [2024-11-07 10:45:13.090708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:16:45.587 [2024-11-07 10:45:13.090737] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state 00:16:45.587 [2024-11-07 10:45:13.090766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:16:45.587 [2024-11-07 10:45:13.090802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:16:45.587 [2024-11-07 10:45:13.090831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:16:45.587 [2024-11-07 10:45:13.090859] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state 00:16:45.587 [2024-11-07 10:45:13.090898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:16:45.587 [2024-11-07 10:45:13.090934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:16:45.587 [2024-11-07 10:45:13.090962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:16:45.587 [2024-11-07 10:45:13.090990] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state 00:16:45.587 [2024-11-07 10:45:13.091020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
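As a sanity check on the Latency(us) table above (a worked example, not test output): bdevperf reports a 65536-byte I/O size, so MiB/s should equal IOPS/16, and the rows bear that out:

  awk 'BEGIN { printf "%.2f\n", 141.69 * 65536 / 1048576 }'   # 8.86, matching the Nvme1n1 row

The Total row checks out the same way: 1339.26 / 16 = 83.70 MiB/s.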
00:16:45.587 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:16:45.587 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:45.587 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:16:45.587 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:16:45.587 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:16:45.587 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:45.587 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:16:45.587 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:16:45.587 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:45.848 rmmod nvme_rdma 00:16:45.848 rmmod nvme_fabrics 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3794199 ']' 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3794199 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3794199 ']' 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3794199 00:16:45.848 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3794199) - No such process 00:16:45.848 
10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3794199 is not found' 00:16:45.848 Process with pid 3794199 is not found 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:45.848 00:16:45.848 real 0m5.553s 00:16:45.848 user 0m15.815s 00:16:45.848 sys 0m1.337s 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:45.848 ************************************ 00:16:45.848 END TEST nvmf_shutdown_tc3 00:16:45.848 ************************************ 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:16:45.848 ************************************ 00:16:45.848 START TEST nvmf_shutdown_tc4 00:16:45.848 ************************************ 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # 
xtrace_disable 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.848 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
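The arrays being filled here (and continuing just below) are vendor:device allowlists; the script then walks the PCI bus and keeps only NICs whose IDs appear in them, which is how the two 0x15b3:0x1015 ports are found a few lines down. A rough standalone equivalent, assuming lspci is available (the script itself matches against a cached pci_bus map instead):

  lspci -Dnn | grep -i '15b3:1015'   # lists 0000:d9:00.0 and 0000:d9:00.1 on this node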
00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:45.849 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:45.849 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:45.849 
10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:45.849 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:45.849 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:16:45.849 10:45:13 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:45.849 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:45.849 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:45.849 altname enp217s0f0np0 00:16:45.849 altname ens818f0np0 00:16:45.849 inet 192.168.100.8/24 scope global mlx_0_0 00:16:45.849 valid_lft forever preferred_lft forever 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:45.849 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:45.850 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:45.850 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:45.850 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:45.850 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:45.850 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:45.850 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:45.850 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:45.850 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:45.850 altname enp217s0f1np1 00:16:45.850 altname ens818f1np1 00:16:45.850 inet 192.168.100.9/24 scope global mlx_0_1 00:16:45.850 valid_lft forever preferred_lft forever 00:16:45.850 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:16:45.850 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:45.850 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:45.850 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:45.850 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:45.850 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:45.850 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:45.850 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:45.850 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:45.850 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 
-- # get_ip_address mlx_0_1 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:46.109 192.168.100.9' 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:46.109 192.168.100.9' 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:46.109 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:46.109 192.168.100.9' 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3795186 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3795186 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 3795186 ']' 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # 
local max_retries=100 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:46.110 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:46.110 [2024-11-07 10:45:13.643307] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:16:46.110 [2024-11-07 10:45:13.643358] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.110 [2024-11-07 10:45:13.719088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.110 [2024-11-07 10:45:13.758885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.110 [2024-11-07 10:45:13.758926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.110 [2024-11-07 10:45:13.758935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.110 [2024-11-07 10:45:13.758944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.110 [2024-11-07 10:45:13.758967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
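Condensed, the nvmftestinit sequence traced above does four things: read each RDMA netdev's IPv4 address, load the host-side nvme-rdma stack, launch nvmf_tgt, and poll its RPC socket until it answers. A minimal sketch under those assumptions (paths, flags and socket as in this run; run from the SPDK repo root):

  # get_ip_address, exactly as the trace shows it
  NVMF_FIRST_TARGET_IP=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)   # 192.168.100.8 here
  modprobe nvme-rdma
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  # waitforlisten-style poll of the RPC socket
  while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done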
00:16:46.110 [2024-11-07 10:45:13.760614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.110 [2024-11-07 10:45:13.760702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.110 [2024-11-07 10:45:13.760809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.110 [2024-11-07 10:45:13.760810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:46.369 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:46.369 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:16:46.369 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:46.369 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:46.369 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:46.369 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.369 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:46.369 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.369 10:45:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:46.369 [2024-11-07 10:45:13.926227] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a320f0/0x1a365e0) succeed. 00:16:46.369 [2024-11-07 10:45:13.935421] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a33780/0x1a77c80) succeed. 
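The two create_ib_device notices above are the visible effect of the nvmf_create_transport RPC issued at the top of this block. Run by hand against the same target it would be (same socket; -u is the in-capsule data size in bytes):

  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192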
00:16:46.628 10:45:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.628 10:45:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:16:46.629 10:45:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:16:46.629 10:45:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:46.629 10:45:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:46.629 10:45:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:46.629 10:45:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:46.629 10:45:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat [the preceding two trace entries repeat verbatim once per subsystem 1-10; nine further identical repetitions omitted] 00:16:46.629 10:45:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:16:46.629 10:45:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.629 10:45:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:46.629 Malloc1 00:16:46.629 [2024-11-07 10:45:14.169038] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:46.629 Malloc2 00:16:46.629 Malloc3 00:16:46.629 Malloc4 00:16:46.888 Malloc5 00:16:46.888 Malloc6 00:16:46.888 Malloc7 00:16:46.888 Malloc8 00:16:46.888 Malloc9 00:16:46.888 Malloc10 00:16:47.175 10:45:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.175 10:45:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:16:47.175 10:45:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:47.175 10:45:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:47.175 10:45:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3795486 00:16:47.175 10:45:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 00:16:47.175 10:45:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:16:47.175 [2024-11-07 10:45:14.696835] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
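Note: the backgrounded perf run above generates the in-flight I/O that tc4 later aborts. As I read spdk_nvme_perf's standard flags (worth confirming against the binary's --help): -q 128 keeps 128 commands outstanding per queue, -o 45056 issues 44 KiB I/Os with a random-write pattern (-w randwrite), -O 4096 sets a 4 KiB I/O unit, -P 4 opens four I/O qpairs, and -t 20 runs for 20 seconds against the RDMA listener created earlier. A minimal re-run sketch, assuming the same listener address; the backgrounding and pid capture mirror what the harness stores in $perfpid:

  #!/usr/bin/env bash
  # Re-run the exact workload the harness launches, capturing the pid the
  # way target/shutdown.sh keeps $perfpid around for the later kill/wait.
  PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
  "$PERF" -q 128 -o 45056 -O 4096 -w randwrite -t 20 -P 4 \
      -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' &
  perfpid=$!
  sleep 5   # same settle window the test uses before pulling the target down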
00:16:52.448 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:52.448 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3795186 00:16:52.448 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3795186 ']' 00:16:52.448 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3795186 00:16:52.448 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:16:52.448 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:52.448 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3795186 00:16:52.448 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:52.448 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:52.448 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3795186' killing process with pid 3795186 00:16:52.448 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 3795186 00:16:52.448 10:45:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 3795186 00:16:52.448 NVMe io qpair process completion error 00:16:52.448 NVMe io qpair process completion error 00:16:52.448 NVMe io qpair process completion error 00:16:52.448 NVMe io qpair process completion error 00:16:52.448 NVMe io qpair process completion error 00:16:52.707 10:45:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:16:53.277 Write completed with error (sct=0, sc=8) 00:16:53.277 starting I/O failed: -6 00:16:53.277 [identical 'Write completed with error (sct=0, sc=8)' entries omitted]
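Note: the killprocess trace above follows a standard kill-and-reap shape — probe the pid with kill -0, refuse to signal anything running as sudo, then kill and wait. A condensed sketch of the same pattern (simplified; the real helper in autotest_common.sh also does the uname and process-name checks visible in the trace):

  #!/usr/bin/env bash
  # Simplified kill-and-reap in the spirit of the killprocess() trace above.
  killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if already gone
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap it; a non-zero status is fine here
  }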
00:16:53.277 [2024-11-07 10:45:20.771544] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed 00:16:53.277 [identical 'Write completed with error (sct=0, sc=8)' entries omitted] 00:16:53.278 starting I/O failed: -6 00:16:53.278 [2024-11-07 10:45:20.782527] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed 00:16:53.278 [identical 'Write completed with error (sct=0, sc=8)' entries omitted] 00:16:53.278 [2024-11-07 10:45:20.794058] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:16:53.278 [identical 'Write completed with error (sct=0, sc=8)' entries omitted] 00:16:53.279 [2024-11-07 10:45:20.804958] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed 00:16:53.279 [identical 'Write completed with error (sct=0, sc=8)' entries omitted]
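Note: each controller the initiator had connected (cnode1 through cnode10) now fails its periodic keep-alive submission because the target process is gone. A quick way to tally which controllers reported the failure, assuming the console output was saved to a file (build.log is a placeholder name):

  # Tally keep-alive failures per controller.
  grep 'Submitting Keep Alive failed' build.log | grep -o 'cnode[0-9]*' | sort | uniq -c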
00:16:53.280 [2024-11-07 10:45:20.815986] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed 00:16:53.280 NVMe io qpair process completion error 00:16:53.280 NVMe io qpair process completion error 00:16:53.280 NVMe io qpair process completion error 00:16:53.280 NVMe io qpair process completion error 00:16:53.280 NVMe io qpair process completion error 00:16:53.848 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3795486 00:16:53.848 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:16:53.848 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3795486 00:16:53.848 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:16:53.848 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.848 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:16:53.848 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.848 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3795486 00:16:54.418 Write completed with error (sct=0, sc=8) 00:16:54.418 [identical 'Write completed with error (sct=0, sc=8)' entries omitted]
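Note: the NOT wait sequence above is the point of shutdown_tc4 — the target was killed mid-workload, so perf exiting non-zero is the expected outcome, and the NOT helper inverts that failure into a test pass. A minimal sketch of the same inversion (this NOT is a simplified stand-in for the harness helper of the same name, which additionally validates its argument, as the valid_exec_arg trace shows):

  # Succeed only if the wrapped command fails, mirroring "NOT wait $perfpid".
  NOT() { if "$@"; then return 1; else return 0; fi; }
  NOT wait "$perfpid" && echo "perf failed as expected"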
00:16:54.418 [2024-11-07 10:45:21.819419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:54.418 [2024-11-07 10:45:21.819484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:16:54.418 [identical 'Write completed with error (sct=0, sc=8)' entries omitted] 00:16:54.418 [2024-11-07 10:45:21.822108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:54.418 [2024-11-07 10:45:21.822153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:16:54.418 [identical 'Write completed with error (sct=0, sc=8)' entries omitted] 00:16:54.418 [2024-11-07 10:45:21.832718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:54.418 [2024-11-07 10:45:21.832793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:16:54.418 [identical 'Write completed with error (sct=0, sc=8)' entries omitted] 00:16:54.419 [2024-11-07 10:45:21.845629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:54.419 [2024-11-07 10:45:21.845703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:16:54.419 [identical 'Write completed with error (sct=0, sc=8)' entries omitted] 00:16:54.419 [2024-11-07 10:45:21.848490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:54.419 [2024-11-07 10:45:21.848571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:16:54.419 [identical 'Write completed with error (sct=0, sc=8)' entries omitted] 00:16:54.420 [2024-11-07 10:45:21.851173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:54.420 [2024-11-07 10:45:21.851219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 [2024-11-07 10:45:21.858388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 [2024-11-07 10:45:21.858433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error 
(sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed 
with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 [2024-11-07 10:45:21.870726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:54.420 [2024-11-07 10:45:21.870798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.420 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 [2024-11-07 10:45:21.873625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No 
such device or address) on qpair id 0 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 [2024-11-07 10:45:21.873671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 Write completed with error (sct=0, sc=8) 00:16:54.421 [2024-11-07 10:45:21.910565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:54.421 [2024-11-07 10:45:21.910631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:16:54.421 Initializing NVMe Controllers 00:16:54.421 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2 00:16:54.421 Controller IO queue size 128, less than required. 00:16:54.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:54.421 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3 00:16:54.421 Controller IO queue size 128, less than required. 00:16:54.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:54.421 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6 00:16:54.421 Controller IO queue size 128, less than required. 00:16:54.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:54.421 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7 00:16:54.421 Controller IO queue size 128, less than required. 
00:16:54.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:54.421 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:16:54.421 Controller IO queue size 128, less than required.
00:16:54.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:54.421 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4
00:16:54.421 Controller IO queue size 128, less than required.
00:16:54.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:54.421 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:16:54.421 Controller IO queue size 128, less than required.
00:16:54.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:54.421 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:16:54.421 Controller IO queue size 128, less than required.
00:16:54.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:54.421 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:16:54.421 Controller IO queue size 128, less than required.
00:16:54.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:54.421 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
00:16:54.421 Controller IO queue size 128, less than required.
00:16:54.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:54.421 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:16:54.421 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:16:54.421 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:16:54.421 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:16:54.421 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:16:54.421 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:16:54.421 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:16:54.421 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:16:54.421 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:16:54.421 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:16:54.421 Initialization complete. Launching workers.
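The "Controller IO queue size 128, less than required" lines above are spdk_nvme_perf's standard warning: the requested queue depth exceeds the IO queue the fabrics controller actually granted, so the surplus requests wait inside the host NVMe driver instead of on the wire. A minimal re-run sketch that heeds the advice, assuming the tool's usual -q (queue depth), -o (IO size in bytes), -w (workload) and -t (seconds) options; the depth of 64 and the cnode1 target are illustrative choices, not values taken from this job:

    PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
    # Keep the per-queue depth below the controller's 128-entry IO queue so
    # writes are not parked in the host driver before submission.
    "$PERF" -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The CQ transport error -6 (ENXIO) bursts above, by contrast, are not a tuning problem: shutdown_tc4 appears to tear the target down while these writes are still in flight, so every queued IO is failed back (sct=0, sc=8) and each controller drops into the failed state before the latency summary below is printed.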
00:16:54.421 ========================================================
00:16:54.421                                                                                  Latency(us)
00:16:54.421 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:16:54.421 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1545.37      66.40   82104.91     116.79 1197210.52
00:16:54.421 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1551.42      66.66   81849.32     111.31 1196954.36
00:16:54.421 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1561.34      67.09   95666.16     102.70 2215828.31
00:16:54.421 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1576.30      67.73   94851.14     111.30 2208219.22
00:16:54.421 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1539.83      66.16   82454.87     113.46 1226667.55
00:16:54.421 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1548.73      66.55   82076.58     116.96 1210790.74
00:16:54.421 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1545.88      66.42   96736.57     113.58 2252130.63
00:16:54.421 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1571.93      67.54   95239.99     113.09 2196258.97
00:16:54.421 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1527.22      65.62   83215.99     115.50 1241591.09
00:16:54.421 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1592.60      68.43   94058.46     114.60 2069023.15
00:16:54.421 ========================================================
00:16:54.421 Total                                                                  :   15560.62     668.62   88876.12     102.70 2252130.63
00:16:54.421
00:16:54.421 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:16:54.421 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:16:54.421 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:16:54.421 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:16:54.421 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:16:54.421 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:16:54.421 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:16:54.421 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:16:54.421 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:16:54.421 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:16:54.421 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:54.421 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:16:54.421 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:16:54.421 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:16:54.422 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:16:54.422 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:54.422 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:16:54.422 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:54.422 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:16:54.422 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:16:54.422 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3795186 ']'
00:16:54.422 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3795186
00:16:54.422 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3795186 ']'
00:16:54.422 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3795186
00:16:54.422 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3795186) - No such process
00:16:54.422 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3795186 is not found'
00:16:54.422 Process with pid 3795186 is not found
00:16:54.422 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:16:54.422 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:16:54.422
00:16:54.422 real 0m8.632s
00:16:54.422 user 0m32.112s
00:16:54.422 sys 0m1.307s
00:16:54.422 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:54.422 10:45:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:16:54.422 ************************************
00:16:54.422 END TEST nvmf_shutdown_tc4
00:16:54.422 ************************************
00:16:54.422 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:16:54.422
00:16:54.422 real 0m32.468s
00:16:54.422 user 1m36.221s
00:16:54.422 sys 0m10.327s
00:16:54.422 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:54.422 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:16:54.422 ************************************
00:16:54.422 END TEST nvmf_shutdown
00:16:54.422 ************************************
00:16:54.680 10:45:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma
00:16:54.680 10:45:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:16:54.680 10:45:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:54.680 10:45:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:54.681
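The teardown traced above follows a fixed pattern: nvmfcleanup syncs and unloads the fabrics kernel modules inside a retry loop, then killprocess probes the target PID before trying to kill it (here PID 3795186 had already exited). A condensed sketch of that pattern; the function names are modeled on test/nvmf/common.sh and test/common/autotest_common.sh, not copied from them:

    cleanup_rdma_modules() {
        set +e                          # unload may fail while references drain
        for _ in {1..20}; do
            modprobe -v -r nvme-rdma && break
            sleep 1
        done
        modprobe -v -r nvme-fabrics
        set -e
    }

    kill_target() {
        local pid=$1
        [ -z "$pid" ] && return 1
        if ! kill -0 "$pid" 2> /dev/null; then
            echo "Process with pid $pid is not found"
            return 0                    # already exited, as in the log above
        fi
        kill "$pid"
        while kill -0 "$pid" 2> /dev/null; do sleep 0.5; done
    }

The kill -0 probe is the same trick the log shows at autotest_common.sh line 956: signal 0 delivers nothing but reports whether the PID still exists.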
************************************ 00:16:54.681 START TEST nvmf_nsid 00:16:54.681 ************************************ 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:16:54.681 * Looking for test storage... 00:16:54.681 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:54.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.681 --rc genhtml_branch_coverage=1 00:16:54.681 --rc genhtml_function_coverage=1 00:16:54.681 --rc genhtml_legend=1 00:16:54.681 --rc geninfo_all_blocks=1 00:16:54.681 --rc geninfo_unexecuted_blocks=1 00:16:54.681 00:16:54.681 ' 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:54.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.681 --rc genhtml_branch_coverage=1 00:16:54.681 --rc genhtml_function_coverage=1 00:16:54.681 --rc genhtml_legend=1 00:16:54.681 --rc geninfo_all_blocks=1 00:16:54.681 --rc geninfo_unexecuted_blocks=1 00:16:54.681 00:16:54.681 ' 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:54.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.681 --rc genhtml_branch_coverage=1 00:16:54.681 --rc genhtml_function_coverage=1 00:16:54.681 --rc genhtml_legend=1 00:16:54.681 --rc geninfo_all_blocks=1 00:16:54.681 --rc geninfo_unexecuted_blocks=1 00:16:54.681 00:16:54.681 ' 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:54.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.681 --rc genhtml_branch_coverage=1 00:16:54.681 --rc genhtml_function_coverage=1 00:16:54.681 --rc genhtml_legend=1 00:16:54.681 --rc geninfo_all_blocks=1 00:16:54.681 --rc geninfo_unexecuted_blocks=1 00:16:54.681 00:16:54.681 ' 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
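The scripts/common.sh trace above is a component-wise version comparison: both version strings are split on '.', '-' or ':' (the IFS=.-: reads) and compared numerically field by field, so "lt 1.15 2" is true here and the lcov 1.x branch/function coverage flags get exported. A self-contained sketch of the same idea, under an illustrative name (ver_lt) rather than the script's own helpers:

    ver_lt() {
        # Return 0 (true) when $1 is strictly older than $2.
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i a b
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}   # missing components compare as 0
            ((a < b)) && return 0
            ((a > b)) && return 1
        done
        return 1                          # equal versions are not less-than
    }

    ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints the message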
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:54.681 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:54.682 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
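The host-identity setup traced above boils down to two values: nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the host ID is that trailing uuid. A sketch of the same setup; deriving the ID with a parameter expansion is one plausible way to do it, not necessarily how common.sh does:

    # Generate a host NQN (nvme-cli) and reuse its uuid as the host ID.
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep everything after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")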
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:54.682 10:45:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.251 10:45:28 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.251 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:01.252 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:01.252 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 
00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:01.252 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:01.252 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:01.252 10:45:28 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 
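The pipeline traced just above is how the harness harvests an interface's IPv4 address: "ip -o -4 addr show <if>" prints one line whose fourth field is the CIDR address, and awk/cut strip it down to the bare IP. A sketch of the same walk over the two mlx interfaces this job discovered; the helper mirrors the traced get_ip_address but is written out here for illustration:

    get_ip_address() {
        local interface=$1
        # Fourth field of "ip -o -4 addr show" is e.g. 192.168.100.8/24.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    for nic in mlx_0_0 mlx_0_1; do
        echo "$nic -> $(get_ip_address "$nic")"
    done
    # Per the ip output below: mlx_0_0 -> 192.168.100.8, mlx_0_1 -> 192.168.100.9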
00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:01.252 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:01.252 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:01.252 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:01.252 altname enp217s0f0np0 00:17:01.252 altname ens818f0np0 00:17:01.252 inet 192.168.100.8/24 scope global mlx_0_0 00:17:01.252 valid_lft forever preferred_lft forever 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:01.253 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:01.253 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:01.253 altname enp217s0f1np1 00:17:01.253 altname ens818f1np1 00:17:01.253 inet 192.168.100.9/24 scope global mlx_0_1 00:17:01.253 valid_lft forever preferred_lft forever 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:01.253 
10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:01.253 192.168.100.9' 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:01.253 192.168.100.9' 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:01.253 192.168.100.9' 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:01.253 10:45:28 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:01.253 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:01.512 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:17:01.512 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:01.512 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:01.512 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:01.512 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:17:01.512 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3799934 00:17:01.512 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3799934 00:17:01.512 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3799934 ']' 00:17:01.512 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.512 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:01.512 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.512 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:01.512 10:45:28 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:01.512 [2024-11-07 10:45:28.984185] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:17:01.512 [2024-11-07 10:45:28.984237] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.512 [2024-11-07 10:45:29.058439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.512 [2024-11-07 10:45:29.096803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.513 [2024-11-07 10:45:29.096840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.513 [2024-11-07 10:45:29.096849] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.513 [2024-11-07 10:45:29.096857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.513 [2024-11-07 10:45:29.096880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
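At this point nvmfappstart has launched build/bin/nvmf_tgt (core mask 0x1, all trace groups via -e 0xFFFF) and waitforlisten blocks until pid 3799934 is serving the UNIX-domain RPC socket /var/tmp/spdk.sock, with max_retries=100 as traced. A hedged sketch of that wait loop, using a simple socket-existence check; the real autotest_common.sh helper is more thorough (it also confirms the socket actually answers RPCs), so treat this as a simplified stand-in:

    # wait_for_rpc_socket is a hypothetical, simplified stand-in for waitforlisten.
    wait_for_rpc_socket() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S $rpc_addr ]] && return 0           # socket exists: target is up
            sleep 0.5
        done
        return 1
    }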
00:17:01.513 [2024-11-07 10:45:29.097488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3799953 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=cab147fd-f5ff-4e09-9614-b95f98b8fd4f 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=da3379e9-c833-4eb5-8e3b-a2c02045ee42 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=e40328bd-9e61-45af-bf88-430e9cf4ed89 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:17:01.773 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.773 10:45:29 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:01.773 null0 00:17:01.773 [2024-11-07 10:45:29.291774] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:17:01.773 [2024-11-07 10:45:29.291819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3799953 ] 00:17:01.773 null1 00:17:01.773 null2 00:17:01.773 [2024-11-07 10:45:29.328971] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbc5770/0xbd5ee0) succeed. 00:17:01.773 [2024-11-07 10:45:29.338290] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbc6c20/0xc55f40) succeed. 00:17:01.773 [2024-11-07 10:45:29.366656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.773 [2024-11-07 10:45:29.388215] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:01.774 [2024-11-07 10:45:29.407305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.774 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.774 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3799953 /var/tmp/tgt2.sock 00:17:01.774 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3799953 ']' 00:17:01.774 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:17:01.774 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:01.774 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:17:01.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:17:01.774 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:01.774 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:02.033 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:02.033 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:17:02.033 10:45:29 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:17:02.291 [2024-11-07 10:45:29.957990] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb18120/0xa87bb0) succeed. 00:17:02.549 [2024-11-07 10:45:29.968770] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb193b0/0xac9250) succeed. 
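The second target (tgt2, core mask 0x2, RPC socket /var/tmp/tgt2.sock) has now been configured over rpc_cmd: three null bdevs appear (null0, null1, null2) and, as the connect below shows, subsystem nqn.2024-10.io.spdk:cnode2 listens on 192.168.100.8:4421. The log records only these side effects, not the rpc_cmd body itself, so the following rpc.py batch is an assumed reconstruction from standard SPDK RPCs; the bdev sizing and the use of -u (per-namespace UUID, from which the NGUIDs verified later are derived) are placeholders:

    # Assumed reconstruction; only the resulting bdevs/listener are visible in the log.
    rpc() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock "$@"; }

    rpc nvmf_create_transport -t rdma
    for i in 0 1 2; do
        rpc bdev_null_create null$i 100 4096    # 100 MiB, 4 KiB blocks (placeholder sizing)
    done
    rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 -u "$ns1uuid"
    rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null1 -u "$ns2uuid"
    rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null2 -u "$ns3uuid"
    rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4421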
00:17:02.549 [2024-11-07 10:45:30.011684] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:17:02.549 nvme0n1 nvme0n2 00:17:02.549 nvme1n1 00:17:02.549 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:17:02.549 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:17:02.550 10:45:30 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid cab147fd-f5ff-4e09-9614-b95f98b8fd4f 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=cab147fdf5ff4e099614b95f98b8fd4f 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo CAB147FDF5FF4E099614B95F98B8FD4F 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ CAB147FDF5FF4E099614B95F98B8FD4F == \C\A\B\1\4\7\F\D\F\5\F\F\4\E\0\9\9\6\1\4\B\9\5\F\9\8\B\8\F\D\4\F ]] 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:17:10.667 10:45:36 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:17:10.667 10:45:36 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid da3379e9-c833-4eb5-8e3b-a2c02045ee42 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=da3379e9c8334eb58e3ba2c02045ee42 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DA3379E9C8334EB58E3BA2C02045EE42 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ DA3379E9C8334EB58E3BA2C02045EE42 == \D\A\3\3\7\9\E\9\C\8\3\3\4\E\B\5\8\E\3\B\A\2\C\0\2\0\4\5\E\E\4\2 ]] 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid e40328bd-9e61-45af-bf88-430e9cf4ed89 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e40328bd9e6145afbf88430e9cf4ed89 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E40328BD9E6145AFBF88430E9CF4ED89 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ E40328BD9E6145AFBF88430E9CF4ED89 == 
\E\4\0\3\2\8\B\D\9\E\6\1\4\5\A\F\B\F\8\8\4\3\0\E\9\C\F\4\E\D\8\9 ]] 00:17:10.667 10:45:37 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:17:17.230 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:17:17.230 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:17:17.230 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3799953 00:17:17.230 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3799953 ']' 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3799953 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3799953 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3799953' 00:17:17.231 killing process with pid 3799953 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3799953 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3799953 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:17.231 rmmod nvme_rdma 00:17:17.231 rmmod nvme_fabrics 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3799934 ']' 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3799934 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3799934 ']' 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3799934 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3799934 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3799934' 00:17:17.231 killing process with pid 3799934 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3799934 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3799934 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:17.231 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:17.231 00:17:17.231 real 0m22.804s 00:17:17.231 user 0m32.971s 00:17:17.231 sys 0m6.278s 00:17:17.489 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:17.489 10:45:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:17.489 ************************************ 00:17:17.489 END TEST nvmf_nsid 00:17:17.489 ************************************ 00:17:17.489 10:45:44 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:17.489 00:17:17.489 real 7m44.450s 00:17:17.489 user 18m10.473s 00:17:17.489 sys 2m16.744s 00:17:17.489 10:45:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:17.489 10:45:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:17.489 ************************************ 00:17:17.489 END TEST nvmf_target_extra 00:17:17.489 ************************************ 00:17:17.489 10:45:44 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:17:17.489 10:45:44 nvmf_rdma -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:17.489 10:45:44 nvmf_rdma -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:17.489 10:45:44 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:17.489 ************************************ 00:17:17.490 START TEST nvmf_host 00:17:17.490 ************************************ 00:17:17.490 10:45:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:17:17.490 * Looking for test storage... 
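The nsid test that just finished validated, for each of the three namespaces, that the NGUID the kernel reports matches the UUID the target was created with, minus dashes and case: nvme_get_nguid runs nvme id-ns -o json, extracts .nguid with jq, and compares it against the uuidgen value passed through tr -d -. A condensed sketch of that check, assuming the controller enumerated as nvme0 (as it did in this run):

    # Compare a namespace's reported NGUID against the UUID it was created with.
    check_nguid() {
        local dev=$1 uuid=$2 want got
        want=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')
        got=$(nvme id-ns "$dev" -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
        [[ $got == "$want" ]]
    }

    check_nguid /dev/nvme0n1 cab147fd-f5ff-4e09-9614-b95f98b8fd4f && echo "nsid 1 OK"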
00:17:17.490 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:17:17.490 10:45:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:17.490 10:45:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:17:17.490 10:45:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.748 10:45:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:17.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.748 --rc genhtml_branch_coverage=1 00:17:17.748 --rc genhtml_function_coverage=1 00:17:17.748 --rc genhtml_legend=1 00:17:17.748 --rc geninfo_all_blocks=1 00:17:17.748 --rc geninfo_unexecuted_blocks=1 00:17:17.748 00:17:17.748 ' 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 
00:17:17.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.749 --rc genhtml_branch_coverage=1 00:17:17.749 --rc genhtml_function_coverage=1 00:17:17.749 --rc genhtml_legend=1 00:17:17.749 --rc geninfo_all_blocks=1 00:17:17.749 --rc geninfo_unexecuted_blocks=1 00:17:17.749 00:17:17.749 ' 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:17.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.749 --rc genhtml_branch_coverage=1 00:17:17.749 --rc genhtml_function_coverage=1 00:17:17.749 --rc genhtml_legend=1 00:17:17.749 --rc geninfo_all_blocks=1 00:17:17.749 --rc geninfo_unexecuted_blocks=1 00:17:17.749 00:17:17.749 ' 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:17.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.749 --rc genhtml_branch_coverage=1 00:17:17.749 --rc genhtml_function_coverage=1 00:17:17.749 --rc genhtml_legend=1 00:17:17.749 --rc geninfo_all_blocks=1 00:17:17.749 --rc geninfo_unexecuted_blocks=1 00:17:17.749 00:17:17.749 ' 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:17.749 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.749 ************************************ 00:17:17.749 START TEST nvmf_multicontroller 00:17:17.749 ************************************ 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:17:17.749 * Looking for test storage... 00:17:17.749 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:17:17.749 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:18.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.009 --rc genhtml_branch_coverage=1 00:17:18.009 --rc genhtml_function_coverage=1 00:17:18.009 --rc genhtml_legend=1 00:17:18.009 --rc geninfo_all_blocks=1 00:17:18.009 --rc geninfo_unexecuted_blocks=1 00:17:18.009 00:17:18.009 ' 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:18.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.009 --rc genhtml_branch_coverage=1 00:17:18.009 --rc genhtml_function_coverage=1 00:17:18.009 --rc genhtml_legend=1 00:17:18.009 --rc geninfo_all_blocks=1 00:17:18.009 --rc geninfo_unexecuted_blocks=1 00:17:18.009 00:17:18.009 ' 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:18.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.009 --rc genhtml_branch_coverage=1 00:17:18.009 --rc genhtml_function_coverage=1 00:17:18.009 --rc genhtml_legend=1 00:17:18.009 --rc geninfo_all_blocks=1 00:17:18.009 --rc geninfo_unexecuted_blocks=1 00:17:18.009 00:17:18.009 ' 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:18.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.009 --rc genhtml_branch_coverage=1 00:17:18.009 --rc genhtml_function_coverage=1 00:17:18.009 --rc genhtml_legend=1 00:17:18.009 --rc geninfo_all_blocks=1 00:17:18.009 --rc geninfo_unexecuted_blocks=1 00:17:18.009 00:17:18.009 ' 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
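The block above (traced once per test binary in this stretch) is scripts/common.sh deciding which LCOV flag spelling applies: lt 1.15 2 splits both version strings on '.', '-' and ':' (IFS=.-:) and compares them field by field as decimals. A condensed sketch of that comparison, assuming purely numeric components (the in-tree decimal helper additionally normalizes non-numeric fields):

    # Field-by-field numeric version compare, as walked in the trace above.
    version_lt() {
        local IFS='.-:' v a b
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            ((a > b)) && return 1
            ((a < b)) && return 0
        done
        return 1    # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # true in this run, selecting the --rc lcov_* spellings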
00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:18.009 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:18.009 10:45:45 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:17:18.009 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:17:18.009 00:17:18.009 real 0m0.209s 00:17:18.009 user 0m0.110s 00:17:18.009 sys 0m0.114s 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:18.009 ************************************ 00:17:18.009 END TEST nvmf_multicontroller 00:17:18.009 ************************************ 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:17:18.009 10:45:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.010 ************************************ 00:17:18.010 START TEST nvmf_aer 00:17:18.010 ************************************ 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:17:18.010 * Looking for test storage... 
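nvmf_multicontroller bails out up front on RDMA: multicontroller.sh compares the suite's transport to rdma, prints the skip message, and exits 0 so run_test still records a pass (hence the near-zero real/user/sys timings above). As traced, the guard reduces to a few lines; the variable name is an assumption, since the trace shows both sides already expanded to rdma:

    # Head of multicontroller.sh as exercised in this run: skip cleanly on RDMA.
    # $TEST_TRANSPORT is assumed to be the variable carrying --transport=rdma.
    if [ "$TEST_TRANSPORT" == rdma ]; then
        echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
        exit 0
    fi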
00:17:18.010 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:18.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.010 --rc genhtml_branch_coverage=1 00:17:18.010 --rc genhtml_function_coverage=1 00:17:18.010 --rc genhtml_legend=1 00:17:18.010 --rc geninfo_all_blocks=1 00:17:18.010 --rc geninfo_unexecuted_blocks=1 00:17:18.010 00:17:18.010 ' 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:18.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.010 --rc genhtml_branch_coverage=1 00:17:18.010 --rc genhtml_function_coverage=1 00:17:18.010 --rc genhtml_legend=1 00:17:18.010 --rc geninfo_all_blocks=1 00:17:18.010 --rc geninfo_unexecuted_blocks=1 00:17:18.010 00:17:18.010 ' 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:18.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.010 --rc genhtml_branch_coverage=1 00:17:18.010 --rc genhtml_function_coverage=1 00:17:18.010 --rc genhtml_legend=1 00:17:18.010 --rc geninfo_all_blocks=1 00:17:18.010 --rc geninfo_unexecuted_blocks=1 00:17:18.010 00:17:18.010 ' 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:18.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.010 --rc genhtml_branch_coverage=1 00:17:18.010 --rc genhtml_function_coverage=1 00:17:18.010 --rc genhtml_legend=1 00:17:18.010 --rc geninfo_all_blocks=1 00:17:18.010 --rc geninfo_unexecuted_blocks=1 00:17:18.010 00:17:18.010 ' 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:18.010 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:18.269 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:17:18.269 10:45:45 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:24.986 10:45:52 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:24.986 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:24.986 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:24.986 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:24.987 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:24.987 
10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:24.987 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.987 10:45:52 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:24.987 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:24.987 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:24.987 altname enp217s0f0np0 00:17:24.987 altname ens818f0np0 00:17:24.987 inet 192.168.100.8/24 scope global mlx_0_0 00:17:24.987 valid_lft forever preferred_lft forever 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:24.987 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:24.987 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:24.987 altname enp217s0f1np1 00:17:24.987 altname ens818f1np1 00:17:24.987 inet 192.168.100.9/24 scope global mlx_0_1 00:17:24.987 valid_lft forever preferred_lft forever 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer 
-- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:24.987 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:24.988 192.168.100.9' 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:24.988 192.168.100.9' 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:24.988 192.168.100.9' 
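
The stretch of trace above is nvmf/common.sh resolving the RDMA interfaces' addresses after loading the IB/RDMA modules: field 4 of 'ip -o -4 addr show <if>' holds the CIDR address, and cutting at the slash leaves the bare IP. A standalone sketch of that helper (the function name matches the traced script; packaging it this way is illustrative):

# get_ip_address, as traced: first IPv4 address of an interface,
# prefix length stripped.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
get_ip_address mlx_0_1   # -> 192.168.100.9 in this run
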
00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3806081 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3806081 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 3806081 ']' 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:24.988 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:24.988 [2024-11-07 10:45:52.643495] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:17:24.988 [2024-11-07 10:45:52.643567] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.247 [2024-11-07 10:45:52.721654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:25.247 [2024-11-07 10:45:52.763194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.247 [2024-11-07 10:45:52.763234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.247 [2024-11-07 10:45:52.763244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.247 [2024-11-07 10:45:52.763251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
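
nvmfappstart, traced just above, amounts to launching the target binary and blocking until its RPC socket answers. A minimal sketch with the exact flags from this run; the SPDK variable and the polling loop (a stand-in for waitforlisten) are illustrative:

# Launch nvmf_tgt: shm id 0 (-i), all tracepoint groups (-e 0xFFFF),
# 4-core mask (-m 0xF), as traced above.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Stand-in for waitforlisten: poll the default RPC socket until it responds.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
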
00:17:25.247 [2024-11-07 10:45:52.763275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.247 [2024-11-07 10:45:52.764866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.247 [2024-11-07 10:45:52.764964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.247 [2024-11-07 10:45:52.765057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:25.247 [2024-11-07 10:45:52.765059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.247 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:25.247 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:17:25.247 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:25.247 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:25.247 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:25.247 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.247 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:25.247 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.247 10:45:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:25.507 [2024-11-07 10:45:52.946409] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2392df0/0x23972e0) succeed. 00:17:25.507 [2024-11-07 10:45:52.955775] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2394480/0x23d8980) succeed. 
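
With both IB devices created, the surrounding trace assembles the target side over RPC: an RDMA transport, a malloc bdev, a subsystem, a namespace, and a listener. Collapsed out of the rpc_cmd wrapper into direct rpc.py calls (all values exactly as traced; SPDK as in the sketch above):

# RDMA transport with the options nvmftestinit derived for this run.
"$SPDK/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# 64 MB malloc bdev with 512-byte blocks, exposed as namespace 1 of cnode1.
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 --name Malloc0
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 2
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420
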
00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:25.507 Malloc0 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:25.507 [2024-11-07 10:45:53.141836] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:25.507 [ 00:17:25.507 { 00:17:25.507 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:25.507 "subtype": "Discovery", 00:17:25.507 "listen_addresses": [], 00:17:25.507 "allow_any_host": true, 00:17:25.507 "hosts": [] 00:17:25.507 }, 00:17:25.507 { 00:17:25.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.507 "subtype": "NVMe", 00:17:25.507 "listen_addresses": [ 00:17:25.507 { 00:17:25.507 "trtype": "RDMA", 00:17:25.507 "adrfam": "IPv4", 00:17:25.507 "traddr": "192.168.100.8", 00:17:25.507 "trsvcid": "4420" 00:17:25.507 } 00:17:25.507 ], 00:17:25.507 "allow_any_host": true, 00:17:25.507 "hosts": [], 00:17:25.507 "serial_number": "SPDK00000000000001", 00:17:25.507 "model_number": "SPDK bdev Controller", 00:17:25.507 "max_namespaces": 2, 00:17:25.507 "min_cntlid": 1, 00:17:25.507 "max_cntlid": 65519, 00:17:25.507 "namespaces": [ 00:17:25.507 { 00:17:25.507 "nsid": 1, 00:17:25.507 "bdev_name": "Malloc0", 00:17:25.507 "name": "Malloc0", 00:17:25.507 "nguid": "B3AE74A13B1040F59ADA678FBFCE528B", 00:17:25.507 "uuid": "b3ae74a1-3b10-40f5-9ada-678fbfce528b" 00:17:25.507 } 00:17:25.507 ] 00:17:25.507 } 00:17:25.507 ] 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3806156 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:17:25.507 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:17:25.766 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:25.766 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:17:25.766 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:17:25.766 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:17:25.766 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:25.766 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:25.766 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:17:25.766 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:25.766 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.766 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:25.766 Malloc1 00:17:25.766 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.766 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:25.766 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.766 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:26.026 [ 00:17:26.026 { 00:17:26.026 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:26.026 "subtype": "Discovery", 00:17:26.026 "listen_addresses": [], 00:17:26.026 "allow_any_host": true, 00:17:26.026 "hosts": [] 00:17:26.026 }, 00:17:26.026 { 00:17:26.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.026 "subtype": "NVMe", 00:17:26.026 "listen_addresses": [ 00:17:26.026 { 00:17:26.026 "trtype": "RDMA", 00:17:26.026 "adrfam": "IPv4", 00:17:26.026 "traddr": "192.168.100.8", 00:17:26.026 "trsvcid": "4420" 00:17:26.026 } 00:17:26.026 ], 00:17:26.026 "allow_any_host": true, 00:17:26.026 "hosts": [], 00:17:26.026 "serial_number": "SPDK00000000000001", 00:17:26.026 "model_number": "SPDK bdev Controller", 00:17:26.026 "max_namespaces": 2, 00:17:26.026 "min_cntlid": 1, 00:17:26.026 "max_cntlid": 65519, 00:17:26.026 "namespaces": [ 00:17:26.026 { 00:17:26.026 "nsid": 1, 00:17:26.026 "bdev_name": "Malloc0", 00:17:26.026 "name": "Malloc0", 00:17:26.026 "nguid": "B3AE74A13B1040F59ADA678FBFCE528B", 00:17:26.026 "uuid": "b3ae74a1-3b10-40f5-9ada-678fbfce528b" 00:17:26.026 }, 00:17:26.026 { 00:17:26.026 "nsid": 2, 00:17:26.026 "bdev_name": "Malloc1", 00:17:26.026 "name": "Malloc1", 00:17:26.026 "nguid": "7A4C8525539147D2ABD4191F49EAE4D4", 00:17:26.026 "uuid": "7a4c8525-5391-47d2-abd4-191f49eae4d4" 00:17:26.026 } 00:17:26.026 ] 00:17:26.026 } 00:17:26.026 ] 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3806156 00:17:26.026 Asynchronous Event Request test 00:17:26.026 Attaching to 192.168.100.8 00:17:26.026 Attached to 192.168.100.8 00:17:26.026 Registering asynchronous event callbacks... 00:17:26.026 Starting namespace attribute notice tests for all controllers... 00:17:26.026 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:26.026 aer_cb - Changed Namespace 00:17:26.026 Cleaning up... 
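
The aer helper signals readiness by creating the touch file, and the autotest_common.sh loop traced above polls for it in 0.1 s steps, giving up after 200 iterations. Reconstructed from the trace (failure handling simplified):

# waitforfile: block until the file exists, ~20 s at most.
waitforfile() {
    local i=0
    while [ ! -e "$1" ]; do
        [ "$i" -lt 200 ] || return 1
        i=$((i + 1))
        sleep 0.1
    done
    return 0
}

rm -f /tmp/aer_touch_file
waitforfile /tmp/aer_touch_file   # returns once the aer binary touches it
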
00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:26.026 rmmod nvme_rdma 00:17:26.026 rmmod nvme_fabrics 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3806081 ']' 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3806081 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 3806081 ']' 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 3806081 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3806081 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3806081' 00:17:26.026 killing process 
with pid 3806081 00:17:26.026 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 3806081 00:17:26.027 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 3806081 00:17:26.286 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:26.286 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:26.286 00:17:26.286 real 0m8.427s 00:17:26.286 user 0m6.436s 00:17:26.286 sys 0m5.813s 00:17:26.286 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:26.286 10:45:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:26.286 ************************************ 00:17:26.286 END TEST nvmf_aer 00:17:26.286 ************************************ 00:17:26.546 10:45:53 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:17:26.546 10:45:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:26.546 10:45:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:26.546 10:45:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.546 ************************************ 00:17:26.546 START TEST nvmf_async_init 00:17:26.546 ************************************ 00:17:26.546 10:45:53 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:17:26.546 * Looking for test storage... 00:17:26.546 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:17:26.547 
10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:26.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.547 --rc genhtml_branch_coverage=1 00:17:26.547 --rc genhtml_function_coverage=1 00:17:26.547 --rc genhtml_legend=1 00:17:26.547 --rc geninfo_all_blocks=1 00:17:26.547 --rc geninfo_unexecuted_blocks=1 00:17:26.547 00:17:26.547 ' 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:26.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.547 --rc genhtml_branch_coverage=1 00:17:26.547 --rc genhtml_function_coverage=1 00:17:26.547 --rc genhtml_legend=1 00:17:26.547 --rc geninfo_all_blocks=1 00:17:26.547 --rc geninfo_unexecuted_blocks=1 00:17:26.547 00:17:26.547 ' 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:26.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.547 --rc genhtml_branch_coverage=1 00:17:26.547 --rc genhtml_function_coverage=1 00:17:26.547 --rc genhtml_legend=1 00:17:26.547 --rc geninfo_all_blocks=1 00:17:26.547 --rc geninfo_unexecuted_blocks=1 00:17:26.547 00:17:26.547 ' 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:26.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.547 --rc genhtml_branch_coverage=1 00:17:26.547 --rc genhtml_function_coverage=1 00:17:26.547 --rc genhtml_legend=1 00:17:26.547 --rc geninfo_all_blocks=1 00:17:26.547 --rc geninfo_unexecuted_blocks=1 00:17:26.547 00:17:26.547 ' 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
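
The block above is the lcov version gate running again for the async_init test: lt 1.15 2 splits both versions on the characters ".-:" and compares them field by field, and since 1 < 2 the extra branch/function coverage flags get exported. A simplified sketch of that comparison (the traced cmp_versions additionally validates each field through decimal; that check is omitted here):

# Field-wise numeric version compare: true when $1 < $2.
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}

lt 1.15 2 && echo "lcov older than 2: enable the --rc lcov_* coverage options"
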
00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:26.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
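
Right after this, async_init.sh derives the namespace NGUID it will later assert on: a freshly generated UUID with the dashes stripped, leaving 32 hex digits. Condensed from the trace below:

# NGUID = UUID without dashes (this run produced
# 2195afd2dc394cd7b165f99f3e5c5739).
nguid=$(uuidgen | tr -d -)
echo "$nguid"
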
00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:17:26.547 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=2195afd2dc394cd7b165f99f3e5c5739 00:17:26.548 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:17:26.548 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:26.548 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.548 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:26.548 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:26.548 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:26.548 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.548 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:26.548 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.548 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:26.548 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:26.548 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:17:26.548 10:45:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:33.119 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:33.120 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:33.120 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:33.120 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:33.120 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:33.120 10:46:00 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:33.120 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:33.120 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:33.120 altname enp217s0f0np0 00:17:33.120 altname ens818f0np0 00:17:33.120 inet 192.168.100.8/24 scope global mlx_0_0 00:17:33.120 valid_lft forever preferred_lft forever 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:33.120 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:33.120 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:33.120 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:33.120 altname enp217s0f1np1 00:17:33.120 altname ens818f1np1 00:17:33.120 inet 192.168.100.9/24 scope global mlx_0_1 00:17:33.121 valid_lft forever preferred_lft forever 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 
2 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:33.121 192.168.100.9' 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:33.121 192.168.100.9' 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:33.121 192.168.100.9' 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 
-- # modprobe nvme-rdma 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3809336 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3809336 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 3809336 ']' 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.121 [2024-11-07 10:46:00.301332] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:17:33.121 [2024-11-07 10:46:00.301383] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.121 [2024-11-07 10:46:00.377199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.121 [2024-11-07 10:46:00.416057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.121 [2024-11-07 10:46:00.416095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.121 [2024-11-07 10:46:00.416105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.121 [2024-11-07 10:46:00.416117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.121 [2024-11-07 10:46:00.416124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
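Condensed, the nvmfappstart/waitforlisten handshake traced above is: launch nvmf_tgt with the recorded flags, note its pid, and block until the RPC socket answers. A minimal sketch of that flow, with the caveat that the polling probe used here (rpc_get_methods via rpc.py) is an assumption standing in for the fuller waitforlisten helper in autotest_common.sh:

    # Start the target: shm id 0, all tracepoint groups enabled, reactor pinned to core 0.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    trap 'kill -9 $nvmfpid' SIGINT SIGTERM EXIT

    # Block until the app accepts RPCs on its default UNIX domain socket.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done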
00:17:33.121 [2024-11-07 10:46:00.416747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.121 [2024-11-07 10:46:00.570436] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfe1b80/0xfe6070) succeed. 00:17:33.121 [2024-11-07 10:46:00.579248] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfe3030/0x1027710) succeed. 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.121 null0 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2195afd2dc394cd7b165f99f3e5c5739 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.121 [2024-11-07 10:46:00.644869] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.121 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:33.122 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.122 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.122 nvme0n1 00:17:33.122 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.122 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:33.122 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.122 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.122 [ 00:17:33.122 { 00:17:33.122 "name": "nvme0n1", 00:17:33.122 "aliases": [ 00:17:33.122 "2195afd2-dc39-4cd7-b165-f99f3e5c5739" 00:17:33.122 ], 00:17:33.122 "product_name": "NVMe disk", 00:17:33.122 "block_size": 512, 00:17:33.122 "num_blocks": 2097152, 00:17:33.122 "uuid": "2195afd2-dc39-4cd7-b165-f99f3e5c5739", 00:17:33.122 "numa_id": 1, 00:17:33.122 "assigned_rate_limits": { 00:17:33.122 "rw_ios_per_sec": 0, 00:17:33.122 "rw_mbytes_per_sec": 0, 00:17:33.122 "r_mbytes_per_sec": 0, 00:17:33.122 "w_mbytes_per_sec": 0 00:17:33.122 }, 00:17:33.122 "claimed": false, 00:17:33.122 "zoned": false, 00:17:33.122 "supported_io_types": { 00:17:33.122 "read": true, 00:17:33.122 "write": true, 00:17:33.122 "unmap": false, 00:17:33.122 "flush": true, 00:17:33.122 "reset": true, 00:17:33.122 "nvme_admin": true, 00:17:33.122 "nvme_io": true, 00:17:33.122 "nvme_io_md": false, 00:17:33.122 "write_zeroes": true, 00:17:33.122 "zcopy": false, 00:17:33.122 "get_zone_info": false, 00:17:33.122 "zone_management": false, 00:17:33.122 "zone_append": false, 00:17:33.122 "compare": true, 00:17:33.122 "compare_and_write": true, 00:17:33.122 "abort": true, 00:17:33.122 "seek_hole": false, 00:17:33.122 "seek_data": false, 00:17:33.122 "copy": true, 00:17:33.122 "nvme_iov_md": false 00:17:33.122 }, 00:17:33.122 "memory_domains": [ 00:17:33.122 { 00:17:33.122 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:33.122 "dma_device_type": 0 00:17:33.122 } 00:17:33.122 ], 00:17:33.122 "driver_specific": { 00:17:33.122 "nvme": [ 00:17:33.122 { 00:17:33.122 "trid": { 00:17:33.122 "trtype": "RDMA", 00:17:33.122 "adrfam": "IPv4", 00:17:33.122 "traddr": "192.168.100.8", 00:17:33.122 "trsvcid": "4420", 00:17:33.122 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:33.122 }, 00:17:33.122 "ctrlr_data": { 00:17:33.122 "cntlid": 1, 00:17:33.122 "vendor_id": "0x8086", 00:17:33.122 "model_number": "SPDK bdev Controller", 00:17:33.122 "serial_number": "00000000000000000000", 00:17:33.122 "firmware_revision": "25.01", 00:17:33.122 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:33.122 "oacs": { 00:17:33.122 "security": 0, 
00:17:33.122 "format": 0, 00:17:33.122 "firmware": 0, 00:17:33.122 "ns_manage": 0 00:17:33.122 }, 00:17:33.122 "multi_ctrlr": true, 00:17:33.122 "ana_reporting": false 00:17:33.122 }, 00:17:33.122 "vs": { 00:17:33.122 "nvme_version": "1.3" 00:17:33.122 }, 00:17:33.122 "ns_data": { 00:17:33.122 "id": 1, 00:17:33.122 "can_share": true 00:17:33.122 } 00:17:33.122 } 00:17:33.122 ], 00:17:33.122 "mp_policy": "active_passive" 00:17:33.122 } 00:17:33.122 } 00:17:33.122 ] 00:17:33.122 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.122 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:33.122 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.122 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.122 [2024-11-07 10:46:00.750249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:33.122 [2024-11-07 10:46:00.769682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:33.382 [2024-11-07 10:46:00.792351] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:17:33.382 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.382 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:33.382 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.382 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.382 [ 00:17:33.382 { 00:17:33.382 "name": "nvme0n1", 00:17:33.382 "aliases": [ 00:17:33.382 "2195afd2-dc39-4cd7-b165-f99f3e5c5739" 00:17:33.382 ], 00:17:33.382 "product_name": "NVMe disk", 00:17:33.382 "block_size": 512, 00:17:33.382 "num_blocks": 2097152, 00:17:33.382 "uuid": "2195afd2-dc39-4cd7-b165-f99f3e5c5739", 00:17:33.382 "numa_id": 1, 00:17:33.382 "assigned_rate_limits": { 00:17:33.382 "rw_ios_per_sec": 0, 00:17:33.382 "rw_mbytes_per_sec": 0, 00:17:33.382 "r_mbytes_per_sec": 0, 00:17:33.382 "w_mbytes_per_sec": 0 00:17:33.382 }, 00:17:33.382 "claimed": false, 00:17:33.382 "zoned": false, 00:17:33.382 "supported_io_types": { 00:17:33.382 "read": true, 00:17:33.382 "write": true, 00:17:33.382 "unmap": false, 00:17:33.382 "flush": true, 00:17:33.382 "reset": true, 00:17:33.382 "nvme_admin": true, 00:17:33.382 "nvme_io": true, 00:17:33.382 "nvme_io_md": false, 00:17:33.382 "write_zeroes": true, 00:17:33.382 "zcopy": false, 00:17:33.382 "get_zone_info": false, 00:17:33.382 "zone_management": false, 00:17:33.382 "zone_append": false, 00:17:33.382 "compare": true, 00:17:33.382 "compare_and_write": true, 00:17:33.382 "abort": true, 00:17:33.382 "seek_hole": false, 00:17:33.382 "seek_data": false, 00:17:33.382 "copy": true, 00:17:33.382 "nvme_iov_md": false 00:17:33.382 }, 00:17:33.382 "memory_domains": [ 00:17:33.382 { 00:17:33.382 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:33.382 "dma_device_type": 0 00:17:33.382 } 00:17:33.382 ], 00:17:33.382 "driver_specific": { 00:17:33.382 "nvme": [ 00:17:33.382 { 00:17:33.382 "trid": { 00:17:33.382 "trtype": "RDMA", 00:17:33.382 "adrfam": "IPv4", 00:17:33.382 "traddr": "192.168.100.8", 
00:17:33.382 "trsvcid": "4420", 00:17:33.382 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:33.382 }, 00:17:33.382 "ctrlr_data": { 00:17:33.382 "cntlid": 2, 00:17:33.382 "vendor_id": "0x8086", 00:17:33.382 "model_number": "SPDK bdev Controller", 00:17:33.382 "serial_number": "00000000000000000000", 00:17:33.382 "firmware_revision": "25.01", 00:17:33.382 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:33.382 "oacs": { 00:17:33.382 "security": 0, 00:17:33.382 "format": 0, 00:17:33.382 "firmware": 0, 00:17:33.382 "ns_manage": 0 00:17:33.382 }, 00:17:33.382 "multi_ctrlr": true, 00:17:33.382 "ana_reporting": false 00:17:33.382 }, 00:17:33.382 "vs": { 00:17:33.382 "nvme_version": "1.3" 00:17:33.382 }, 00:17:33.382 "ns_data": { 00:17:33.382 "id": 1, 00:17:33.382 "can_share": true 00:17:33.382 } 00:17:33.382 } 00:17:33.382 ], 00:17:33.382 "mp_policy": "active_passive" 00:17:33.382 } 00:17:33.382 } 00:17:33.382 ] 00:17:33.382 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.382 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.382 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.382 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.382 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.382 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:17:33.382 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.2FmeNnCJUr 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.2FmeNnCJUr 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.2FmeNnCJUr 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.383 [2024-11-07 10:46:00.867796] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.383 [2024-11-07 10:46:00.883831] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:33.383 nvme0n1 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.383 [ 00:17:33.383 { 00:17:33.383 "name": "nvme0n1", 00:17:33.383 "aliases": [ 00:17:33.383 "2195afd2-dc39-4cd7-b165-f99f3e5c5739" 00:17:33.383 ], 00:17:33.383 "product_name": "NVMe disk", 00:17:33.383 "block_size": 512, 00:17:33.383 "num_blocks": 2097152, 00:17:33.383 "uuid": "2195afd2-dc39-4cd7-b165-f99f3e5c5739", 00:17:33.383 "numa_id": 1, 00:17:33.383 "assigned_rate_limits": { 00:17:33.383 "rw_ios_per_sec": 0, 00:17:33.383 "rw_mbytes_per_sec": 0, 00:17:33.383 "r_mbytes_per_sec": 0, 00:17:33.383 "w_mbytes_per_sec": 0 00:17:33.383 }, 00:17:33.383 "claimed": false, 00:17:33.383 "zoned": false, 00:17:33.383 "supported_io_types": { 00:17:33.383 "read": true, 00:17:33.383 "write": true, 00:17:33.383 "unmap": false, 00:17:33.383 "flush": true, 00:17:33.383 "reset": true, 00:17:33.383 "nvme_admin": true, 00:17:33.383 "nvme_io": true, 00:17:33.383 "nvme_io_md": false, 00:17:33.383 "write_zeroes": true, 00:17:33.383 "zcopy": false, 00:17:33.383 "get_zone_info": false, 00:17:33.383 "zone_management": false, 00:17:33.383 "zone_append": false, 00:17:33.383 "compare": true, 00:17:33.383 "compare_and_write": true, 00:17:33.383 "abort": true, 00:17:33.383 "seek_hole": false, 00:17:33.383 "seek_data": false, 00:17:33.383 "copy": true, 00:17:33.383 "nvme_iov_md": false 00:17:33.383 }, 00:17:33.383 "memory_domains": [ 00:17:33.383 { 00:17:33.383 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:33.383 "dma_device_type": 0 00:17:33.383 } 00:17:33.383 ], 00:17:33.383 "driver_specific": { 00:17:33.383 "nvme": [ 00:17:33.383 { 00:17:33.383 "trid": { 00:17:33.383 "trtype": "RDMA", 00:17:33.383 "adrfam": "IPv4", 00:17:33.383 "traddr": "192.168.100.8", 00:17:33.383 "trsvcid": "4421", 00:17:33.383 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:33.383 }, 00:17:33.383 "ctrlr_data": { 00:17:33.383 "cntlid": 3, 00:17:33.383 "vendor_id": "0x8086", 00:17:33.383 "model_number": "SPDK bdev Controller", 00:17:33.383 
"serial_number": "00000000000000000000", 00:17:33.383 "firmware_revision": "25.01", 00:17:33.383 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:33.383 "oacs": { 00:17:33.383 "security": 0, 00:17:33.383 "format": 0, 00:17:33.383 "firmware": 0, 00:17:33.383 "ns_manage": 0 00:17:33.383 }, 00:17:33.383 "multi_ctrlr": true, 00:17:33.383 "ana_reporting": false 00:17:33.383 }, 00:17:33.383 "vs": { 00:17:33.383 "nvme_version": "1.3" 00:17:33.383 }, 00:17:33.383 "ns_data": { 00:17:33.383 "id": 1, 00:17:33.383 "can_share": true 00:17:33.383 } 00:17:33.383 } 00:17:33.383 ], 00:17:33.383 "mp_policy": "active_passive" 00:17:33.383 } 00:17:33.383 } 00:17:33.383 ] 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.383 10:46:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.2FmeNnCJUr 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:33.383 rmmod nvme_rdma 00:17:33.383 rmmod nvme_fabrics 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3809336 ']' 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3809336 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 3809336 ']' 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 3809336 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:33.383 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3809336 00:17:33.643 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:33.643 10:46:01 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:33.643 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3809336' 00:17:33.643 killing process with pid 3809336 00:17:33.643 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 3809336 00:17:33.643 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 3809336 00:17:33.643 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:33.643 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:33.643 00:17:33.643 real 0m7.324s 00:17:33.643 user 0m2.744s 00:17:33.643 sys 0m5.057s 00:17:33.643 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:33.643 10:46:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.643 ************************************ 00:17:33.643 END TEST nvmf_async_init 00:17:33.643 ************************************ 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.903 ************************************ 00:17:33.903 START TEST dma 00:17:33.903 ************************************ 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:17:33.903 * Looking for test storage... 
00:17:33.903 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:33.903 10:46:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:33.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.903 --rc genhtml_branch_coverage=1 00:17:33.903 --rc genhtml_function_coverage=1 00:17:33.903 --rc genhtml_legend=1 00:17:33.903 --rc geninfo_all_blocks=1 00:17:33.904 --rc geninfo_unexecuted_blocks=1 00:17:33.904 00:17:33.904 ' 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:33.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.904 --rc genhtml_branch_coverage=1 00:17:33.904 --rc genhtml_function_coverage=1 00:17:33.904 --rc genhtml_legend=1 00:17:33.904 --rc geninfo_all_blocks=1 00:17:33.904 --rc geninfo_unexecuted_blocks=1 00:17:33.904 00:17:33.904 ' 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:33.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.904 --rc genhtml_branch_coverage=1 00:17:33.904 --rc genhtml_function_coverage=1 00:17:33.904 --rc genhtml_legend=1 00:17:33.904 --rc geninfo_all_blocks=1 00:17:33.904 --rc geninfo_unexecuted_blocks=1 00:17:33.904 00:17:33.904 ' 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:33.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.904 --rc genhtml_branch_coverage=1 00:17:33.904 --rc genhtml_function_coverage=1 00:17:33.904 --rc genhtml_legend=1 00:17:33.904 --rc geninfo_all_blocks=1 00:17:33.904 --rc geninfo_unexecuted_blocks=1 00:17:33.904 00:17:33.904 ' 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:33.904 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
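The dma test's nvmftestinit below retraces the path the async_init test walked above: enumerate the supported NVMe-oF NICs, confirm hardware is present, then resolve each RDMA interface's IPv4 address. The per-interface lookup that the common.sh@116 and @117 entries trace out repeatedly condenses to this helper, reconstructed from those entries:

    # Reconstructed from the common.sh@116/@117 trace lines in this log.
    get_ip_address() {
        local interface=$1
        # "ip -o -4" emits one line per address; field 4 is addr/prefix, e.g. 192.168.100.8/24.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # -> 192.168.100.9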
00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:17:33.904 10:46:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:40.486 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:40.486 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:40.486 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:40.486 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:40.487 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:40.487 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:40.747 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:40.747 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:40.747 altname enp217s0f0np0 00:17:40.747 altname ens818f0np0 00:17:40.747 inet 192.168.100.8/24 scope global mlx_0_0 00:17:40.747 valid_lft forever preferred_lft forever 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:40.747 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:40.747 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:40.747 altname enp217s0f1np1 00:17:40.747 altname ens818f1np1 00:17:40.747 inet 192.168.100.9/24 scope global mlx_0_1 00:17:40.747 valid_lft forever preferred_lft forever 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:40.747 192.168.100.9' 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:40.747 192.168.100.9' 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:40.747 192.168.100.9' 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=3812806 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 3812806 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@833 -- # '[' -z 3812806 ']' 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:40.747 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:40.748 [2024-11-07 10:46:08.335223] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:17:40.748 [2024-11-07 10:46:08.335278] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.748 [2024-11-07 10:46:08.412660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:41.007 [2024-11-07 10:46:08.452154] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.007 [2024-11-07 10:46:08.452191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.007 [2024-11-07 10:46:08.452201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.007 [2024-11-07 10:46:08.452209] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.007 [2024-11-07 10:46:08.452216] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
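[Annotation] The nvmfappstart call above boils down to launching nvmf_tgt in the background and waiting for its RPC socket before any configuration RPCs are sent. A condensed sketch, with the binary path and flags taken from the trace; the poll loop is only an approximation of the waitforlisten helper:

    # Sketch of nvmfappstart -m 0x3 as traced above.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # Wait until the app answers on the default RPC socket.
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done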
00:17:41.007 [2024-11-07 10:46:08.453485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.007 [2024-11-07 10:46:08.453489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.007 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:41.007 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@866 -- # return 0 00:17:41.007 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:41.007 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:41.007 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:41.007 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.007 10:46:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:17:41.007 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.007 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:41.007 [2024-11-07 10:46:08.617858] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5cc730/0x5d0c20) succeed. 00:17:41.007 [2024-11-07 10:46:08.626904] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5cdc80/0x6122c0) succeed. 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:41.266 Malloc0 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:41.266 [2024-11-07 10:46:08.791987] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 
-o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:41.266 { 00:17:41.266 "params": { 00:17:41.266 "name": "Nvme$subsystem", 00:17:41.266 "trtype": "$TEST_TRANSPORT", 00:17:41.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:41.266 "adrfam": "ipv4", 00:17:41.266 "trsvcid": "$NVMF_PORT", 00:17:41.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:41.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:41.266 "hdgst": ${hdgst:-false}, 00:17:41.266 "ddgst": ${ddgst:-false} 00:17:41.266 }, 00:17:41.266 "method": "bdev_nvme_attach_controller" 00:17:41.266 } 00:17:41.266 EOF 00:17:41.266 )") 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:17:41.266 10:46:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:41.266 "params": { 00:17:41.266 "name": "Nvme0", 00:17:41.266 "trtype": "rdma", 00:17:41.266 "traddr": "192.168.100.8", 00:17:41.266 "adrfam": "ipv4", 00:17:41.266 "trsvcid": "4420", 00:17:41.266 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:41.266 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:41.266 "hdgst": false, 00:17:41.266 "ddgst": false 00:17:41.266 }, 00:17:41.266 "method": "bdev_nvme_attach_controller" 00:17:41.266 }' 00:17:41.266 [2024-11-07 10:46:08.843217] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
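[Annotation] The rpc_cmd calls traced above map one-to-one onto scripts/rpc.py invocations; the equivalent standalone sequence for this test's target-side setup, with arguments copied from the trace, would be:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
    $rpc bdev_malloc_create 256 512 -b Malloc0      # 256 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420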
00:17:41.266 [2024-11-07 10:46:08.843262] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3812988 ] 00:17:41.266 [2024-11-07 10:46:08.915867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:41.526 [2024-11-07 10:46:08.956927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:41.526 [2024-11-07 10:46:08.956930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.800 bdev Nvme0n1 reports 1 memory domains 00:17:46.800 bdev Nvme0n1 supports RDMA memory domain 00:17:46.800 Initialization complete, running randrw IO for 5 sec on 2 cores 00:17:46.800 ========================================================================== 00:17:46.800 Latency [us] 00:17:46.800 IOPS MiB/s Average min max 00:17:46.800 Core 2: 21727.89 84.87 735.77 237.84 8560.98 00:17:46.800 Core 3: 21765.28 85.02 734.45 255.96 8464.87 00:17:46.800 ========================================================================== 00:17:46.800 Total : 43493.17 169.90 735.11 237.84 8560.98 00:17:46.800 00:17:46.800 Total operations: 217503, translate 217503 pull_push 0 memzero 0 00:17:46.800 10:46:14 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:17:46.800 10:46:14 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:17:46.800 10:46:14 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:17:46.800 [2024-11-07 10:46:14.373077] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
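[Annotation] For reference, the dma paths this suite exercises are selected by the -x flag of test_dma; the invocations, copied from this log, differ only in the backing bdev and mode (each is fed its bdev config on fd 62 by a gen_*_json helper as shown above, so they are not standalone commands; the full binary path is spdk/test/dma/test_dma/test_dma):

    # translate: RDMA-capable NVMe bdev, exercises memory-domain translation
    test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate
    # pull_push: Malloc bdev with no RDMA memory domain, data is copied
    test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push
    # memzero: lvol-backed randread run that exercises the memzero dma path
    test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero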
00:17:46.800 [2024-11-07 10:46:14.373136] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3813774 ] 00:17:46.800 [2024-11-07 10:46:14.446863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:47.059 [2024-11-07 10:46:14.487573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:47.059 [2024-11-07 10:46:14.487577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.331 bdev Malloc0 reports 2 memory domains 00:17:52.331 bdev Malloc0 doesn't support RDMA memory domain 00:17:52.331 Initialization complete, running randrw IO for 5 sec on 2 cores 00:17:52.331 ========================================================================== 00:17:52.331 Latency [us] 00:17:52.331 IOPS MiB/s Average min max 00:17:52.331 Core 2: 14511.60 56.69 1101.89 428.72 1418.02 00:17:52.331 Core 3: 14643.75 57.20 1091.92 412.00 1889.51 00:17:52.331 ========================================================================== 00:17:52.331 Total : 29155.35 113.89 1096.89 412.00 1889.51 00:17:52.331 00:17:52.331 Total operations: 145827, translate 0 pull_push 583308 memzero 0 00:17:52.331 10:46:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:17:52.331 10:46:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:17:52.331 10:46:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:17:52.331 10:46:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:17:52.331 Ignoring -M option 00:17:52.331 [2024-11-07 10:46:19.808930] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
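[Annotation] A quick consistency check on the pull_push summary above: the operation count lines up with the reported IOPS over the 5-second run, and the pull_push counter comes out at exactly four per completed I/O in this configuration (an observation from these numbers, not a documented invariant):

    echo $(( 145827 / 5 ))        # 29165 ops/s, vs. 29155.35 IOPS reported
    echo $(( 583308 / 145827 ))   # 4 pull_push operations per I/O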
00:17:52.331 [2024-11-07 10:46:19.808987] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3814800 ] 00:17:52.331 [2024-11-07 10:46:19.881865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:52.331 [2024-11-07 10:46:19.921741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:52.331 [2024-11-07 10:46:19.921744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.898 bdev 3ccbbabe-77fe-4835-a6d0-a2e53acd7cdd reports 1 memory domains 00:17:58.898 bdev 3ccbbabe-77fe-4835-a6d0-a2e53acd7cdd supports RDMA memory domain 00:17:58.898 Initialization complete, running randread IO for 5 sec on 2 cores 00:17:58.898 ========================================================================== 00:17:58.898 Latency [us] 00:17:58.898 IOPS MiB/s Average min max 00:17:58.898 Core 2: 68947.62 269.33 231.16 83.67 3825.76 00:17:58.898 Core 3: 69970.28 273.32 227.77 68.24 2295.11 00:17:58.898 ========================================================================== 00:17:58.898 Total : 138917.90 542.65 229.45 68.24 3825.76 00:17:58.898 00:17:58.898 Total operations: 694682, translate 0 pull_push 0 memzero 694682 00:17:58.898 10:46:25 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:17:58.898 [2024-11-07 10:46:25.468946] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:00.275 Initializing NVMe Controllers 00:18:00.275 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:18:00.275 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:18:00.275 Initialization complete. Launching workers. 00:18:00.275 ======================================================== 00:18:00.275 Latency(us) 00:18:00.275 Device Information : IOPS MiB/s Average min max 00:18:00.275 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 1996.91 7.80 7979.99 5984.33 9977.76 00:18:00.275 ======================================================== 00:18:00.275 Total : 1996.91 7.80 7979.99 5984.33 9977.76 00:18:00.275 00:18:00.275 10:46:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:18:00.275 10:46:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:18:00.275 10:46:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:18:00.275 10:46:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:18:00.275 [2024-11-07 10:46:27.804262] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
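[Annotation] The short spdk_nvme_perf pass above doubles as a fabric sanity check: it discovers the target over RDMA, attaches cnode0, and runs one second of 4 KiB writes. It can be reproduced against the same listener with the flags recorded in the trace:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 16 -o 4096 -w write -t 1 \
        -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'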
00:18:00.275 [2024-11-07 10:46:27.804320] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3816100 ] 00:18:00.275 [2024-11-07 10:46:27.875273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:00.275 [2024-11-07 10:46:27.915128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:00.275 [2024-11-07 10:46:27.915131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.844 bdev d5307ca6-7208-4d7b-a57a-db82120adc0a reports 1 memory domains 00:18:06.844 bdev d5307ca6-7208-4d7b-a57a-db82120adc0a supports RDMA memory domain 00:18:06.844 Initialization complete, running randrw IO for 5 sec on 2 cores 00:18:06.844 ========================================================================== 00:18:06.844 Latency [us] 00:18:06.844 IOPS MiB/s Average min max 00:18:06.844 Core 2: 19130.49 74.73 835.73 30.72 13199.44 00:18:06.844 Core 3: 19377.85 75.69 825.02 11.87 12830.04 00:18:06.844 ========================================================================== 00:18:06.844 Total : 38508.34 150.42 830.34 11.87 13199.44 00:18:06.844 00:18:06.844 Total operations: 192567, translate 192460 pull_push 0 memzero 107 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:06.844 rmmod nvme_rdma 00:18:06.844 rmmod nvme_fabrics 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 3812806 ']' 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 3812806 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@952 -- # '[' -z 3812806 ']' 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # kill -0 3812806 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@957 -- # uname 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3812806 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3812806' 00:18:06.844 killing 
process with pid 3812806 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@971 -- # kill 3812806 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@976 -- # wait 3812806 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:06.844 00:18:06.844 real 0m32.385s 00:18:06.844 user 1m34.969s 00:18:06.844 sys 0m6.368s 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:06.844 ************************************ 00:18:06.844 END TEST dma 00:18:06.844 ************************************ 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.844 ************************************ 00:18:06.844 START TEST nvmf_identify 00:18:06.844 ************************************ 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:18:06.844 * Looking for test storage... 00:18:06.844 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 
-- # (( v = 0 )) 00:18:06.844 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:06.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.845 --rc genhtml_branch_coverage=1 00:18:06.845 --rc genhtml_function_coverage=1 00:18:06.845 --rc genhtml_legend=1 00:18:06.845 --rc geninfo_all_blocks=1 00:18:06.845 --rc geninfo_unexecuted_blocks=1 00:18:06.845 00:18:06.845 ' 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:06.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.845 --rc genhtml_branch_coverage=1 00:18:06.845 --rc genhtml_function_coverage=1 00:18:06.845 --rc genhtml_legend=1 00:18:06.845 --rc geninfo_all_blocks=1 00:18:06.845 --rc geninfo_unexecuted_blocks=1 00:18:06.845 00:18:06.845 ' 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:06.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.845 --rc genhtml_branch_coverage=1 00:18:06.845 --rc genhtml_function_coverage=1 00:18:06.845 --rc genhtml_legend=1 00:18:06.845 --rc geninfo_all_blocks=1 00:18:06.845 --rc geninfo_unexecuted_blocks=1 00:18:06.845 00:18:06.845 ' 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:06.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.845 --rc genhtml_branch_coverage=1 00:18:06.845 --rc genhtml_function_coverage=1 00:18:06.845 --rc genhtml_legend=1 00:18:06.845 --rc geninfo_all_blocks=1 00:18:06.845 --rc geninfo_unexecuted_blocks=1 00:18:06.845 00:18:06.845 ' 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:18:06.845 10:46:33 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:06.845 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:06.845 10:46:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:06.845 10:46:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:06.845 10:46:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:06.845 10:46:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:18:06.845 10:46:34 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:06.845 10:46:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:06.845 10:46:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:06.845 10:46:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:06.845 10:46:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:06.845 10:46:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.845 10:46:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.845 10:46:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.845 10:46:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:06.845 10:46:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:06.845 10:46:34 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:18:06.845 10:46:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:13.527 10:46:40 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:13.527 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:13.527 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:13.527 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:13.527 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:13.527 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:13.528 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:13.528 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:13.528 altname enp217s0f0np0 00:18:13.528 altname ens818f0np0 00:18:13.528 inet 192.168.100.8/24 scope global mlx_0_0 00:18:13.528 valid_lft forever preferred_lft forever 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:13.528 10:46:40 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:13.528 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:13.528 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:13.528 altname enp217s0f1np1 00:18:13.528 altname ens818f1np1 00:18:13.528 inet 192.168.100.9/24 scope global mlx_0_1 00:18:13.528 valid_lft forever preferred_lft forever 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:18:13.528 10:46:40 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:13.528 192.168.100.9' 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:13.528 192.168.100.9' 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:13.528 192.168.100.9' 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3820278 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # 
waitforlisten 3820278 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 3820278 ']' 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:13.528 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.529 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:13.529 10:46:40 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:13.529 [2024-11-07 10:46:40.905697] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:18:13.529 [2024-11-07 10:46:40.905745] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.529 [2024-11-07 10:46:40.983111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:13.529 [2024-11-07 10:46:41.022559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.529 [2024-11-07 10:46:41.022600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.529 [2024-11-07 10:46:41.022609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.529 [2024-11-07 10:46:41.022616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.529 [2024-11-07 10:46:41.022623] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.529 [2024-11-07 10:46:41.024217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.529 [2024-11-07 10:46:41.024322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.529 [2024-11-07 10:46:41.024416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:13.529 [2024-11-07 10:46:41.024419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.097 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:14.097 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:18:14.097 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:14.097 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.097 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:14.356 [2024-11-07 10:46:41.772574] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14e9df0/0x14ee2e0) succeed. 00:18:14.356 [2024-11-07 10:46:41.781976] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14eb480/0x152f980) succeed. 
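
For reference outside the harness, the target bring-up traced above maps onto plain SPDK tooling. A minimal sketch using the same flags recorded in the log (run from the spdk checkout; the polling loop is a simplified stand-in for the harness's waitforlisten helper, and rpc_cmd is a thin wrapper around scripts/rpc.py):

  # launch the NVMe-oF target with the flags shown above (-e 0xFFFF is the tracepoint group mask, -m 0xF the core mask)
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # crude stand-in for waitforlisten: poll the default RPC socket until it answers
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # create the RDMA transport, as rpc_cmd does above
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
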
00:18:14.356 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:14.356 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:18:14.356 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable
00:18:14.356 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:18:14.356 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:18:14.356 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:14.356 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:18:14.356 Malloc0
00:18:14.356 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:14.356 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:18:14.356 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:14.356 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:18:14.356 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:14.356 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:18:14.356 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:14.356 10:46:41 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:18:14.356 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:14.356 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:18:14.356 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:14.356 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:18:14.356 [2024-11-07 10:46:42.013688] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:18:14.356 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:14.356 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:18:14.356 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:14.622 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:18:14.622 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:14.622 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:18:14.622 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:14.622 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:18:14.622 [
00:18:14.622 {
00:18:14.622 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:18:14.622 "subtype": "Discovery",
00:18:14.622 "listen_addresses": [
00:18:14.622 {
00:18:14.622 "trtype": "RDMA",
00:18:14.622 "adrfam": "IPv4",
00:18:14.622 "traddr": "192.168.100.8",
00:18:14.622 "trsvcid": "4420"
00:18:14.622 }
00:18:14.622 ],
00:18:14.622 "allow_any_host": true,
00:18:14.622 "hosts": []
00:18:14.622 },
00:18:14.622 {
00:18:14.622 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:14.622 "subtype": "NVMe",
00:18:14.622 "listen_addresses": [
00:18:14.622 {
00:18:14.622 "trtype": "RDMA",
00:18:14.622 "adrfam": "IPv4",
00:18:14.622 "traddr": "192.168.100.8",
00:18:14.622 "trsvcid": "4420"
00:18:14.622 }
00:18:14.622 ],
00:18:14.622 "allow_any_host": true,
00:18:14.622 "hosts": [],
00:18:14.622 "serial_number": "SPDK00000000000001",
00:18:14.622 "model_number": "SPDK bdev Controller",
00:18:14.622 "max_namespaces": 32,
00:18:14.622 "min_cntlid": 1,
00:18:14.622 "max_cntlid": 65519,
00:18:14.622 "namespaces": [
00:18:14.622 {
00:18:14.622 "nsid": 1,
00:18:14.622 "bdev_name": "Malloc0",
00:18:14.622 "name": "Malloc0",
00:18:14.622 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:18:14.622 "eui64": "ABCDEF0123456789",
00:18:14.622 "uuid": "03cdb203-c9d5-4841-bf94-ff537845fca1"
00:18:14.622 }
00:18:14.622 ]
00:18:14.622 }
00:18:14.622 ]
00:18:14.622 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:14.622 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:18:14.622 [2024-11-07 10:46:42.071860] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:18:14.622 [2024-11-07 10:46:42.071899] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3820557 ]
00:18:14.622 [2024-11-07 10:46:42.134711] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout)
00:18:14.622 [2024-11-07 10:46:42.134797] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:18:14.622 [2024-11-07 10:46:42.134812] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:18:14.622 [2024-11-07 10:46:42.134817] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:18:14.622 [2024-11-07 10:46:42.134847] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout)
00:18:14.622 [2024-11-07 10:46:42.145936] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
00:18:14.622 [2024-11-07 10:46:42.156014] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:18:14.622 [2024-11-07 10:46:42.156024] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:18:14.622 [2024-11-07 10:46:42.156032] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156040] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156046] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156055] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156061] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156067] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156073] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156079] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156085] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156091] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156097] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156103] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156109] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156115] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156121] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156127] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156133] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156139] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156145] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156151] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156157] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156163] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156169] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 
10:46:42.156175] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156181] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156187] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156193] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156199] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156205] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156211] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156217] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156223] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:18:14.622 [2024-11-07 10:46:42.156228] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:18:14.622 [2024-11-07 10:46:42.156233] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:18:14.622 [2024-11-07 10:46:42.156253] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.156268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x181b00 00:18:14.622 [2024-11-07 10:46:42.161513] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.622 [2024-11-07 10:46:42.161523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:14.622 [2024-11-07 10:46:42.161532] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.161539] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:14.622 [2024-11-07 10:46:42.161547] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:18:14.622 [2024-11-07 10:46:42.161554] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:18:14.622 [2024-11-07 10:46:42.161569] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.622 [2024-11-07 10:46:42.161577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.622 [2024-11-07 10:46:42.161600] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.622 [2024-11-07 10:46:42.161606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:18:14.622 [2024-11-07 10:46:42.161612] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:18:14.623 [2024-11-07 10:46:42.161618] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.161625] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:18:14.623 [2024-11-07 10:46:42.161632] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.161640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.623 [2024-11-07 10:46:42.161657] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.623 [2024-11-07 10:46:42.161663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:18:14.623 [2024-11-07 10:46:42.161670] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:18:14.623 [2024-11-07 10:46:42.161676] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.161684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:18:14.623 [2024-11-07 10:46:42.161691] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.161698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.623 [2024-11-07 10:46:42.161723] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.623 [2024-11-07 10:46:42.161728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:14.623 [2024-11-07 10:46:42.161735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:14.623 [2024-11-07 10:46:42.161741] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.161749] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.161759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.623 [2024-11-07 10:46:42.161778] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.623 [2024-11-07 10:46:42.161784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:14.623 [2024-11-07 10:46:42.161790] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:18:14.623 [2024-11-07 10:46:42.161796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:18:14.623 [2024-11-07 10:46:42.161802] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181b00 00:18:14.623 [2024-11-07 
10:46:42.161809] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:14.623 [2024-11-07 10:46:42.161919] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:18:14.623 [2024-11-07 10:46:42.161925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:14.623 [2024-11-07 10:46:42.161935] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.161942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.623 [2024-11-07 10:46:42.161963] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.623 [2024-11-07 10:46:42.161968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:14.623 [2024-11-07 10:46:42.161975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:14.623 [2024-11-07 10:46:42.161981] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.161989] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.161996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.623 [2024-11-07 10:46:42.162017] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.623 [2024-11-07 10:46:42.162022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:18:14.623 [2024-11-07 10:46:42.162029] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:14.623 [2024-11-07 10:46:42.162035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:18:14.623 [2024-11-07 10:46:42.162040] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.162048] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:18:14.623 [2024-11-07 10:46:42.162056] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:18:14.623 [2024-11-07 10:46:42.162067] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.162075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181b00 00:18:14.623 [2024-11-07 10:46:42.162114] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
00:18:14.623 [2024-11-07 10:46:42.162120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:14.623 [2024-11-07 10:46:42.162128] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:18:14.623 [2024-11-07 10:46:42.162134] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:18:14.623 [2024-11-07 10:46:42.162140] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:18:14.623 [2024-11-07 10:46:42.162149] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:18:14.623 [2024-11-07 10:46:42.162155] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:18:14.623 [2024-11-07 10:46:42.162161] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:18:14.623 [2024-11-07 10:46:42.162166] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.162174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:18:14.623 [2024-11-07 10:46:42.162182] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.162190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.623 [2024-11-07 10:46:42.162211] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.623 [2024-11-07 10:46:42.162216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:14.623 [2024-11-07 10:46:42.162225] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.162232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.623 [2024-11-07 10:46:42.162239] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.162246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.623 [2024-11-07 10:46:42.162253] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.162260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.623 [2024-11-07 10:46:42.162267] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.162274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.623 [2024-11-07 10:46:42.162279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:14.623 [2024-11-07 10:46:42.162285] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.162293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:14.623 [2024-11-07 10:46:42.162301] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.162308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.623 [2024-11-07 10:46:42.162330] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.623 [2024-11-07 10:46:42.162336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:18:14.623 [2024-11-07 10:46:42.162343] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:18:14.623 [2024-11-07 10:46:42.162349] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:18:14.623 [2024-11-07 10:46:42.162355] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.162364] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.623 [2024-11-07 10:46:42.162371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181b00 00:18:14.623 [2024-11-07 10:46:42.162393] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.624 [2024-11-07 10:46:42.162398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:14.624 [2024-11-07 10:46:42.162406] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181b00 00:18:14.624 [2024-11-07 10:46:42.162415] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:18:14.624 [2024-11-07 10:46:42.162438] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.624 [2024-11-07 10:46:42.162446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x181b00 00:18:14.624 [2024-11-07 10:46:42.162454] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181b00 00:18:14.624 [2024-11-07 10:46:42.162461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.624 [2024-11-07 10:46:42.162482] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.624 [2024-11-07 10:46:42.162488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
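
The GET LOG PAGE exchanges above are the identify tool walking the discovery log page header and records. With the nvme-rdma module already loaded earlier in this run, the same two records can be fetched with nvme-cli (a sketch; assumes nvme-cli is installed on the host):

  nvme discover -t rdma -a 192.168.100.8 -s 4420
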
00:18:14.624 [2024-11-07 10:46:42.162499] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x181b00
00:18:14.624 [2024-11-07 10:46:42.162511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x181b00
00:18:14.624 [2024-11-07 10:46:42.162517] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181b00
00:18:14.624 [2024-11-07 10:46:42.162523] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:18:14.624 [2024-11-07 10:46:42.162529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:18:14.624 [2024-11-07 10:46:42.162535] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181b00
00:18:14.624 [2024-11-07 10:46:42.162541] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:18:14.624 [2024-11-07 10:46:42.162546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:18:14.624 [2024-11-07 10:46:42.162556] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181b00
00:18:14.624 [2024-11-07 10:46:42.162563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x181b00
00:18:14.624 [2024-11-07 10:46:42.162571] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181b00
00:18:14.624 [2024-11-07 10:46:42.162593] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:18:14.624 [2024-11-07 10:46:42.162598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:18:14.624 [2024-11-07 10:46:42.162609] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181b00
00:18:14.624 =====================================================
00:18:14.624 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery
00:18:14.624 =====================================================
00:18:14.624 Controller Capabilities/Features
00:18:14.624 ================================
00:18:14.624 Vendor ID: 0000
00:18:14.624 Subsystem Vendor ID: 0000
00:18:14.624 Serial Number: ....................
00:18:14.624 Model Number: ........................................
00:18:14.624 Firmware Version: 25.01
00:18:14.624 Recommended Arb Burst: 0
00:18:14.624 IEEE OUI Identifier: 00 00 00
00:18:14.624 Multi-path I/O
00:18:14.624 May have multiple subsystem ports: No
00:18:14.624 May have multiple controllers: No
00:18:14.624 Associated with SR-IOV VF: No
00:18:14.624 Max Data Transfer Size: 131072
00:18:14.624 Max Number of Namespaces: 0
00:18:14.624 Max Number of I/O Queues: 1024
00:18:14.624 NVMe Specification Version (VS): 1.3
00:18:14.624 NVMe Specification Version (Identify): 1.3
00:18:14.624 Maximum Queue Entries: 128
00:18:14.624 Contiguous Queues Required: Yes
00:18:14.624 Arbitration Mechanisms Supported
00:18:14.624 Weighted Round Robin: Not Supported
00:18:14.624 Vendor Specific: Not Supported
00:18:14.624 Reset Timeout: 15000 ms
00:18:14.624 Doorbell Stride: 4 bytes
00:18:14.624 NVM Subsystem Reset: Not Supported
00:18:14.624 Command Sets Supported
00:18:14.624 NVM Command Set: Supported
00:18:14.624 Boot Partition: Not Supported
00:18:14.624 Memory Page Size Minimum: 4096 bytes
00:18:14.624 Memory Page Size Maximum: 4096 bytes
00:18:14.624 Persistent Memory Region: Not Supported
00:18:14.624 Optional Asynchronous Events Supported
00:18:14.624 Namespace Attribute Notices: Not Supported
00:18:14.624 Firmware Activation Notices: Not Supported
00:18:14.624 ANA Change Notices: Not Supported
00:18:14.624 PLE Aggregate Log Change Notices: Not Supported
00:18:14.624 LBA Status Info Alert Notices: Not Supported
00:18:14.624 EGE Aggregate Log Change Notices: Not Supported
00:18:14.624 Normal NVM Subsystem Shutdown event: Not Supported
00:18:14.624 Zone Descriptor Change Notices: Not Supported
00:18:14.624 Discovery Log Change Notices: Supported
00:18:14.624 Controller Attributes
00:18:14.624 128-bit Host Identifier: Not Supported
00:18:14.624 Non-Operational Permissive Mode: Not Supported
00:18:14.624 NVM Sets: Not Supported
00:18:14.624 Read Recovery Levels: Not Supported
00:18:14.624 Endurance Groups: Not Supported
00:18:14.624 Predictable Latency Mode: Not Supported
00:18:14.624 Traffic Based Keep ALive: Not Supported
00:18:14.624 Namespace Granularity: Not Supported
00:18:14.624 SQ Associations: Not Supported
00:18:14.624 UUID List: Not Supported
00:18:14.624 Multi-Domain Subsystem: Not Supported
00:18:14.624 Fixed Capacity Management: Not Supported
00:18:14.624 Variable Capacity Management: Not Supported
00:18:14.624 Delete Endurance Group: Not Supported
00:18:14.624 Delete NVM Set: Not Supported
00:18:14.624 Extended LBA Formats Supported: Not Supported
00:18:14.624 Flexible Data Placement Supported: Not Supported
00:18:14.624
00:18:14.624 Controller Memory Buffer Support
00:18:14.624 ================================
00:18:14.624 Supported: No
00:18:14.624
00:18:14.624 Persistent Memory Region Support
00:18:14.624 ================================
00:18:14.624 Supported: No
00:18:14.624
00:18:14.624 Admin Command Set Attributes
00:18:14.624 ============================
00:18:14.624 Security Send/Receive: Not Supported
00:18:14.624 Format NVM: Not Supported
00:18:14.624 Firmware Activate/Download: Not Supported
00:18:14.624 Namespace Management: Not Supported
00:18:14.624 Device Self-Test: Not Supported
00:18:14.624 Directives: Not Supported
00:18:14.624 NVMe-MI: Not Supported
00:18:14.624 Virtualization Management: Not Supported
00:18:14.624 Doorbell Buffer Config: Not Supported
00:18:14.624 Get LBA Status Capability: Not Supported
00:18:14.624 Command & Feature Lockdown Capability: Not Supported
00:18:14.624 Abort Command Limit: 1
00:18:14.624 Async Event Request Limit: 4
00:18:14.624 Number of Firmware Slots: N/A
00:18:14.624 Firmware Slot 1 Read-Only: N/A
00:18:14.624 Firmware Activation Without Reset: N/A
00:18:14.624 Multiple Update Detection Support: N/A
00:18:14.624 Firmware Update Granularity: No Information Provided
00:18:14.624 Per-Namespace SMART Log: No
00:18:14.624 Asymmetric Namespace Access Log Page: Not Supported
00:18:14.624 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:18:14.624 Command Effects Log Page: Not Supported
00:18:14.624 Get Log Page Extended Data: Supported
00:18:14.624 Telemetry Log Pages: Not Supported
00:18:14.624 Persistent Event Log Pages: Not Supported
00:18:14.624 Supported Log Pages Log Page: May Support
00:18:14.624 Commands Supported & Effects Log Page: Not Supported
00:18:14.624 Feature Identifiers & Effects Log Page:May Support
00:18:14.624 NVMe-MI Commands & Effects Log Page: May Support
00:18:14.624 Data Area 4 for Telemetry Log: Not Supported
00:18:14.624 Error Log Page Entries Supported: 128
00:18:14.624 Keep Alive: Not Supported
00:18:14.624
00:18:14.624 NVM Command Set Attributes
00:18:14.624 ==========================
00:18:14.624 Submission Queue Entry Size
00:18:14.624 Max: 1
00:18:14.624 Min: 1
00:18:14.624 Completion Queue Entry Size
00:18:14.624 Max: 1
00:18:14.624 Min: 1
00:18:14.624 Number of Namespaces: 0
00:18:14.625 Compare Command: Not Supported
00:18:14.625 Write Uncorrectable Command: Not Supported
00:18:14.625 Dataset Management Command: Not Supported
00:18:14.625 Write Zeroes Command: Not Supported
00:18:14.625 Set Features Save Field: Not Supported
00:18:14.625 Reservations: Not Supported
00:18:14.625 Timestamp: Not Supported
00:18:14.625 Copy: Not Supported
00:18:14.625 Volatile Write Cache: Not Present
00:18:14.625 Atomic Write Unit (Normal): 1
00:18:14.625 Atomic Write Unit (PFail): 1
00:18:14.625 Atomic Compare & Write Unit: 1
00:18:14.625 Fused Compare & Write: Supported
00:18:14.625 Scatter-Gather List
00:18:14.625 SGL Command Set: Supported
00:18:14.625 SGL Keyed: Supported
00:18:14.625 SGL Bit Bucket Descriptor: Not Supported
00:18:14.625 SGL Metadata Pointer: Not Supported
00:18:14.625 Oversized SGL: Not Supported
00:18:14.625 SGL Metadata Address: Not Supported
00:18:14.625 SGL Offset: Supported
00:18:14.625 Transport SGL Data Block: Not Supported
00:18:14.625 Replay Protected Memory Block: Not Supported
00:18:14.625
00:18:14.625 Firmware Slot Information
00:18:14.625 =========================
00:18:14.625 Active slot: 0
00:18:14.625
00:18:14.625
00:18:14.625 Error Log
00:18:14.625 =========
00:18:14.625
00:18:14.625 Active Namespaces
00:18:14.625 =================
00:18:14.625 Discovery Log Page
00:18:14.625 ==================
00:18:14.625 Generation Counter: 2
00:18:14.625 Number of Records: 2
00:18:14.625 Record Format: 0
00:18:14.625
00:18:14.625 Discovery Log Entry 0
00:18:14.625 ----------------------
00:18:14.625 Transport Type: 1 (RDMA)
00:18:14.625 Address Family: 1 (IPv4)
00:18:14.625 Subsystem Type: 3 (Current Discovery Subsystem)
00:18:14.625 Entry Flags:
00:18:14.625 Duplicate Returned Information: 1
00:18:14.625 Explicit Persistent Connection Support for Discovery: 1
00:18:14.625 Transport Requirements:
00:18:14.625 Secure Channel: Not Required
00:18:14.625 Port ID: 0 (0x0000)
00:18:14.625 Controller ID: 65535 (0xffff)
00:18:14.625 Admin Max SQ Size: 128
00:18:14.625 Transport Service Identifier: 4420
00:18:14.625 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:18:14.625 Transport Address: 192.168.100.8
00:18:14.625 Transport Specific Address Subtype - RDMA
00:18:14.625 RDMA QP Service Type: 1 (Reliable Connected)
00:18:14.625 RDMA Provider Type: 1 (No provider specified)
00:18:14.625 RDMA CM Service: 1 (RDMA_CM)
00:18:14.625 Discovery Log Entry 1
00:18:14.625 ----------------------
00:18:14.625 Transport Type: 1 (RDMA)
00:18:14.625 Address Family: 1 (IPv4)
00:18:14.625 Subsystem Type: 2 (NVM Subsystem)
00:18:14.625 Entry Flags:
00:18:14.625 Duplicate Returned Information: 0
00:18:14.625 Explicit Persistent Connection Support for Discovery: 0
00:18:14.625 Transport Requirements:
00:18:14.625 Secure Channel: Not Required
00:18:14.625 Port ID: 0 (0x0000)
00:18:14.625 Controller ID: 65535 (0xffff)
00:18:14.625 Admin Max SQ Size: [2024-11-07 10:46:42.162679] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:18:14.625 [2024-11-07 10:46:42.162689] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 57001 doesn't match qid
00:18:14.625 [2024-11-07 10:46:42.162703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32715 cdw0:c6151c40 sqhd:9a40 p:0 m:0 dnr:0
00:18:14.625 [2024-11-07 10:46:42.162710] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 57001 doesn't match qid
00:18:14.625 [2024-11-07 10:46:42.162718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32715 cdw0:c6151c40 sqhd:9a40 p:0 m:0 dnr:0
00:18:14.625 [2024-11-07 10:46:42.162724] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 57001 doesn't match qid
00:18:14.625 [2024-11-07 10:46:42.162731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32715 cdw0:c6151c40 sqhd:9a40 p:0 m:0 dnr:0
00:18:14.625 [2024-11-07 10:46:42.162738] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 57001 doesn't match qid
00:18:14.625 [2024-11-07 10:46:42.162745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32715 cdw0:c6151c40 sqhd:9a40 p:0 m:0 dnr:0
00:18:14.625 [2024-11-07 10:46:42.162757] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181b00
00:18:14.625 [2024-11-07 10:46:42.162765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:18:14.625 [2024-11-07 10:46:42.162779] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:18:14.625 [2024-11-07 10:46:42.162784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0
00:18:14.625 [2024-11-07 10:46:42.162793] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00
00:18:14.625 [2024-11-07 10:46:42.162801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:18:14.625 [2024-11-07 10:46:42.162807] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181b00
00:18:14.625 [2024-11-07 10:46:42.162825] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:18:14.625 [2024-11-07 10:46:42.162831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:18:14.625 [2024-11-07 10:46:42.162838]
nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:18:14.625 [2024-11-07 10:46:42.162844] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:18:14.625 [2024-11-07 10:46:42.162850] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181b00 00:18:14.625 [2024-11-07 10:46:42.162858] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.625 [2024-11-07 10:46:42.162866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.625 [2024-11-07 10:46:42.162882] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.625 [2024-11-07 10:46:42.162887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:18:14.625 [2024-11-07 10:46:42.162896] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181b00 00:18:14.625 [2024-11-07 10:46:42.162905] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.625 [2024-11-07 10:46:42.162913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.625 [2024-11-07 10:46:42.162929] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.625 [2024-11-07 10:46:42.162935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:18:14.625 [2024-11-07 10:46:42.162941] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181b00 00:18:14.625 [2024-11-07 10:46:42.162949] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.625 [2024-11-07 10:46:42.162957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.625 [2024-11-07 10:46:42.162975] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.625 [2024-11-07 10:46:42.162980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:18:14.625 [2024-11-07 10:46:42.162987] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181b00 00:18:14.625 [2024-11-07 10:46:42.162995] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.625 [2024-11-07 10:46:42.163003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.625 [2024-11-07 10:46:42.163026] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.625 [2024-11-07 10:46:42.163032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:18:14.626 [2024-11-07 10:46:42.163039] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163047] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.626 [2024-11-07 10:46:42.163077] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.626 [2024-11-07 10:46:42.163082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:18:14.626 [2024-11-07 10:46:42.163089] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163098] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.626 [2024-11-07 10:46:42.163121] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.626 [2024-11-07 10:46:42.163127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:18:14.626 [2024-11-07 10:46:42.163133] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163142] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.626 [2024-11-07 10:46:42.163167] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.626 [2024-11-07 10:46:42.163172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:18:14.626 [2024-11-07 10:46:42.163180] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163189] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.626 [2024-11-07 10:46:42.163211] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.626 [2024-11-07 10:46:42.163216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:18:14.626 [2024-11-07 10:46:42.163223] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163231] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.626 [2024-11-07 10:46:42.163259] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.626 [2024-11-07 10:46:42.163264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:18:14.626 [2024-11-07 10:46:42.163270] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163279] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.626 [2024-11-07 10:46:42.163304] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.626 [2024-11-07 10:46:42.163309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:18:14.626 [2024-11-07 10:46:42.163315] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163324] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.626 [2024-11-07 10:46:42.163353] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.626 [2024-11-07 10:46:42.163358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:18:14.626 [2024-11-07 10:46:42.163365] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163373] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.626 [2024-11-07 10:46:42.163406] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.626 [2024-11-07 10:46:42.163411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:18:14.626 [2024-11-07 10:46:42.163418] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163426] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.626 [2024-11-07 10:46:42.163453] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.626 [2024-11-07 10:46:42.163460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:18:14.626 [2024-11-07 10:46:42.163467] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163475] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:18:14.626 [2024-11-07 10:46:42.163498] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.626 [2024-11-07 10:46:42.163504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:18:14.626 [2024-11-07 10:46:42.163515] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163523] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.626 [2024-11-07 10:46:42.163554] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.626 [2024-11-07 10:46:42.163560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:18:14.626 [2024-11-07 10:46:42.163566] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163575] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.626 [2024-11-07 10:46:42.163600] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.626 [2024-11-07 10:46:42.163605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:14.626 [2024-11-07 10:46:42.163612] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163620] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.626 [2024-11-07 10:46:42.163643] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.626 [2024-11-07 10:46:42.163649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:18:14.626 [2024-11-07 10:46:42.163655] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163664] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.626 [2024-11-07 10:46:42.163693] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.626 [2024-11-07 10:46:42.163698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:18:14.626 [2024-11-07 10:46:42.163704] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163713] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.626 [2024-11-07 10:46:42.163736] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.626 [2024-11-07 10:46:42.163745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:18:14.626 [2024-11-07 10:46:42.163751] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163760] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.626 [2024-11-07 10:46:42.163785] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.626 [2024-11-07 10:46:42.163791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:18:14.626 [2024-11-07 10:46:42.163797] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163805] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.626 [2024-11-07 10:46:42.163831] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.626 [2024-11-07 10:46:42.163836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:18:14.626 [2024-11-07 10:46:42.163842] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163851] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.626 [2024-11-07 10:46:42.163858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.626 [2024-11-07 10:46:42.163880] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.627 [2024-11-07 10:46:42.163885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:18:14.627 [2024-11-07 10:46:42.163891] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.163900] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.163908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.627 [2024-11-07 10:46:42.163929] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.627 [2024-11-07 10:46:42.163934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:18:14.627 [2024-11-07 10:46:42.163940] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.163949] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.163957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.627 [2024-11-07 10:46:42.163980] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.627 [2024-11-07 10:46:42.163985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:18:14.627 [2024-11-07 10:46:42.163992] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164000] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.627 [2024-11-07 10:46:42.164031] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.627 [2024-11-07 10:46:42.164036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:18:14.627 [2024-11-07 10:46:42.164043] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164051] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.627 [2024-11-07 10:46:42.164078] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.627 [2024-11-07 10:46:42.164084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:18:14.627 [2024-11-07 10:46:42.164090] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164098] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.627 [2024-11-07 10:46:42.164124] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.627 [2024-11-07 10:46:42.164129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:18:14.627 [2024-11-07 10:46:42.164135] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164144] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:18:14.627 [2024-11-07 10:46:42.164175] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.627 [2024-11-07 10:46:42.164181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:18:14.627 [2024-11-07 10:46:42.164187] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164195] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.627 [2024-11-07 10:46:42.164219] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.627 [2024-11-07 10:46:42.164224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:18:14.627 [2024-11-07 10:46:42.164230] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164239] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.627 [2024-11-07 10:46:42.164270] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.627 [2024-11-07 10:46:42.164275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:18:14.627 [2024-11-07 10:46:42.164281] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164290] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.627 [2024-11-07 10:46:42.164319] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.627 [2024-11-07 10:46:42.164324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:18:14.627 [2024-11-07 10:46:42.164330] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164339] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.627 [2024-11-07 10:46:42.164370] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.627 [2024-11-07 10:46:42.164375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:18:14.627 [2024-11-07 10:46:42.164381] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164390] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.627 [2024-11-07 10:46:42.164419] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.627 [2024-11-07 10:46:42.164424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:18:14.627 [2024-11-07 10:46:42.164431] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164439] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.627 [2024-11-07 10:46:42.164464] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.627 [2024-11-07 10:46:42.164470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:18:14.627 [2024-11-07 10:46:42.164476] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164484] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.627 [2024-11-07 10:46:42.164515] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.627 [2024-11-07 10:46:42.164521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:18:14.627 [2024-11-07 10:46:42.164527] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164535] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.627 [2024-11-07 10:46:42.164561] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.627 [2024-11-07 10:46:42.164566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:18:14.627 [2024-11-07 10:46:42.164572] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164581] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.627 [2024-11-07 10:46:42.164590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.164604] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.628 [2024-11-07 10:46:42.164610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:18:14.628 [2024-11-07 10:46:42.164616] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.164625] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.164632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.164652] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.628 [2024-11-07 10:46:42.164657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:18:14.628 [2024-11-07 10:46:42.164663] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.164672] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.164680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.164701] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.628 [2024-11-07 10:46:42.164706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:18:14.628 [2024-11-07 10:46:42.164712] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.164721] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.164729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.164748] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.628 [2024-11-07 10:46:42.164753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:18:14.628 [2024-11-07 10:46:42.164760] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.164768] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.164776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.164795] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.628 [2024-11-07 10:46:42.164801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:18:14.628 [2024-11-07 10:46:42.164807] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.164815] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.164823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.164839] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.628 [2024-11-07 10:46:42.164844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:18:14.628 [2024-11-07 10:46:42.164850] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.164859] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.164869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.164884] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.628 [2024-11-07 10:46:42.164890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:18:14.628 [2024-11-07 10:46:42.164896] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.164905] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.164912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.164934] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.628 [2024-11-07 10:46:42.164939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:18:14.628 [2024-11-07 10:46:42.164945] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.164954] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.164961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.164979] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.628 [2024-11-07 10:46:42.164984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:18:14.628 [2024-11-07 10:46:42.164991] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.164999] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.165028] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.628 [2024-11-07 10:46:42.165034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:18:14.628 [2024-11-07 10:46:42.165040] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165048] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.165079] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.628 [2024-11-07 10:46:42.165085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:18:14.628 [2024-11-07 10:46:42.165091] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165099] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.165128] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.628 [2024-11-07 10:46:42.165134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:14.628 [2024-11-07 10:46:42.165140] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165150] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.165177] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.628 [2024-11-07 10:46:42.165183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:18:14.628 [2024-11-07 10:46:42.165189] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165197] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.165226] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.628 [2024-11-07 10:46:42.165232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:18:14.628 [2024-11-07 10:46:42.165238] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165247] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.165274] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.628 [2024-11-07 10:46:42.165279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:18:14.628 [2024-11-07 10:46:42.165285] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165294] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.165319] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.628 [2024-11-07 10:46:42.165324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:18:14.628 [2024-11-07 10:46:42.165331] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165339] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.165364] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.628 [2024-11-07 10:46:42.165370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:18:14.628 [2024-11-07 10:46:42.165376] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165384] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.165415] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.628 [2024-11-07 10:46:42.165421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:18:14.628 [2024-11-07 10:46:42.165427] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165437] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.628 [2024-11-07 10:46:42.165445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.628 [2024-11-07 10:46:42.165466] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.629 [2024-11-07 10:46:42.165471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:18:14.629 [2024-11-07 10:46:42.165478] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181b00 00:18:14.629 [2024-11-07 10:46:42.165486] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.629 [2024-11-07 10:46:42.165494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:18:14.629 [2024-11-07 10:46:42.169513] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.629 [2024-11-07 10:46:42.169521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:18:14.629 [2024-11-07 10:46:42.169527] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181b00 00:18:14.629 [2024-11-07 10:46:42.169536] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.629 [2024-11-07 10:46:42.169544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.629 [2024-11-07 10:46:42.169560] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.629 [2024-11-07 10:46:42.169565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000a p:0 m:0 dnr:0 00:18:14.629 [2024-11-07 10:46:42.169571] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181b00 00:18:14.629 [2024-11-07 10:46:42.169578] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:18:14.629 128 00:18:14.629 Transport Service Identifier: 4420 00:18:14.629 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:18:14.629 Transport Address: 192.168.100.8 00:18:14.629 Transport Specific Address Subtype - RDMA 00:18:14.629 RDMA QP Service Type: 1 (Reliable Connected) 00:18:14.629 RDMA Provider Type: 1 (No provider specified) 00:18:14.629 RDMA CM Service: 1 (RDMA_CM) 00:18:14.629 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:18:14.629 [2024-11-07 10:46:42.243280] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:18:14.629 [2024-11-07 10:46:42.243320] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3820559 ] 00:18:14.891 [2024-11-07 10:46:42.303700] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:18:14.891 [2024-11-07 10:46:42.303771] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:18:14.892 [2024-11-07 10:46:42.303784] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:18:14.892 [2024-11-07 10:46:42.303791] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:18:14.892 [2024-11-07 10:46:42.303815] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:18:14.892 [2024-11-07 10:46:42.322019] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
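[editor's note] The two entries just printed (the discovery subsystem itself at Entry 0, nqn.2016-06.io.spdk:cnode1 at Entry 1) follow the fixed 1024-byte discovery log page entry layout from the NVMe over Fabrics specification. Below is a minimal, self-contained sketch of that layout with Entry 1's values filled in; the struct and field names follow the spec, not SPDK's headers, so treat it as illustrative rather than as SPDK code.

    /*
     * Minimal sketch of one NVMe-oF discovery log page entry (1024 bytes),
     * laid out per the NVMe over Fabrics specification. Values are copied
     * from Discovery Log Entry 1 above. Illustrative only, not SPDK code.
     */
    #include <stdint.h>
    #include <stdio.h>

    #pragma pack(push, 1)
    struct discovery_log_entry {
        uint8_t  trtype;      /* 1 = RDMA, as printed above */
        uint8_t  adrfam;      /* 1 = IPv4 */
        uint8_t  subtype;     /* 2 = NVM subsystem, 3 = discovery subsystem */
        uint8_t  treq;        /* transport requirements (secure channel) */
        uint16_t portid;      /* 0x0000 in this run */
        uint16_t cntlid;      /* 0xffff = dynamic controller model */
        uint16_t asqsz;       /* admin max SQ size, 128 above */
        uint8_t  rsvd10[22];
        char     trsvcid[32]; /* "4420" */
        uint8_t  rsvd64[192];
        char     subnqn[256]; /* e.g. nqn.2016-06.io.spdk:cnode1 */
        char     traddr[256]; /* "192.168.100.8" */
        uint8_t  tsas[256];   /* transport specific (RDMA QP type etc.) */
    };
    #pragma pack(pop)
    _Static_assert(sizeof(struct discovery_log_entry) == 1024, "spec size");

    int main(void)
    {
        struct discovery_log_entry e = {
            .trtype = 1, .adrfam = 1, .subtype = 2,
            .portid = 0, .cntlid = 0xffff, .asqsz = 128,
            .trsvcid = "4420",
            .subnqn  = "nqn.2016-06.io.spdk:cnode1",
            .traddr  = "192.168.100.8",
        };
        printf("trtype=%u subtype=%u cntlid=0x%04x asqsz=%u\n",
               (unsigned)e.trtype, (unsigned)e.subtype,
               (unsigned)e.cntlid, (unsigned)e.asqsz);
        printf("traddr=%.256s trsvcid=%.32s subnqn=%.256s\n",
               e.traddr, e.trsvcid, e.subnqn);
        return 0;
    }

Running it reproduces the same cntlid/asqsz/trsvcid/subnqn values that spdk_nvme_identify rendered above.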
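[editor's note] The long runs of FABRIC PROPERTY GET / SUCCESS pairs in this log are what the identify tool's -L all pass looks like on the wire: each controller property (CAP, VS, CC, CSTS, ...) is fetched one offset at a time with the Fabrics Property Get command, opcode 0x7f with function type 0x04. A hedged sketch of that 64-byte submission queue entry follows, with byte offsets taken from the NVMe-oF specification; the struct and field names are ours for illustration, not SPDK's.

    /*
     * Hedged sketch of the Fabrics "Property Get" SQE, offsets per the
     * NVMe-oF specification. Illustrative only.
     */
    #include <stdint.h>
    #include <string.h>

    #define NVME_OPC_FABRICS     0x7f  /* shared opcode for all Fabrics commands */
    #define NVMF_FCTYPE_PROP_GET 0x04  /* Property Get function type */

    struct nvmf_property_get_cmd {
        uint8_t  opcode;     /* byte 0: 0x7f */
        uint8_t  rsvd1;
        uint16_t cid;        /* bytes 2-3: command identifier */
        uint8_t  fctype;     /* byte 4: 0x04 for Property Get */
        uint8_t  rsvd5[35];
        uint8_t  attrib;     /* byte 40: 0 = 4-byte, 1 = 8-byte property */
        uint8_t  rsvd41[3];
        uint32_t ofst;       /* bytes 44-47: property offset */
        uint8_t  rsvd48[16];
    };
    _Static_assert(sizeof(struct nvmf_property_get_cmd) == 64, "SQE is 64 bytes");

    /* Build a get of the 4-byte CSTS property (offset 0x1c). */
    static void build_csts_get(struct nvmf_property_get_cmd *cmd, uint16_t cid)
    {
        memset(cmd, 0, sizeof(*cmd));
        cmd->opcode = NVME_OPC_FABRICS;
        cmd->fctype = NVMF_FCTYPE_PROP_GET;
        cmd->cid    = cid;
        cmd->attrib = 0;    /* CSTS is 4 bytes wide */
        cmd->ofst   = 0x1c; /* CSTS offset in the property space */
    }

    int main(void)
    {
        struct nvmf_property_get_cmd cmd;
        build_csts_get(&cmd, 3); /* cid:3, like the admin commands in the log */
        return cmd.opcode == NVME_OPC_FABRICS ? 0 : 1;
    }

The property value travels back in the completion entry's first two dwords, which is why, during the connect sequence below, the VS read completes with cdw0:10300 (NVMe version 1.3).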
00:18:14.892 [2024-11-07 10:46:42.332086] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:18:14.892 [2024-11-07 10:46:42.332097] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:18:14.892 [2024-11-07 10:46:42.332104] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332111] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332117] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332123] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332129] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332135] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332141] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332147] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332153] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332159] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332165] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332171] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332177] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332183] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332189] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332195] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332201] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332207] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332214] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332220] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332226] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332232] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332238] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 
10:46:42.332244] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332250] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332256] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332262] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332270] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332276] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332282] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332288] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332294] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:18:14.892 [2024-11-07 10:46:42.332299] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:18:14.892 [2024-11-07 10:46:42.332303] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:18:14.892 [2024-11-07 10:46:42.332321] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.332334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x181b00 00:18:14.892 [2024-11-07 10:46:42.337513] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.892 [2024-11-07 10:46:42.337524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:14.892 [2024-11-07 10:46:42.337531] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.337539] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:14.892 [2024-11-07 10:46:42.337546] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:18:14.892 [2024-11-07 10:46:42.337552] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:18:14.892 [2024-11-07 10:46:42.337566] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.337575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.892 [2024-11-07 10:46:42.337592] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.892 [2024-11-07 10:46:42.337598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:18:14.892 [2024-11-07 10:46:42.337604] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:18:14.892 [2024-11-07 10:46:42.337610] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.337617] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:18:14.892 [2024-11-07 10:46:42.337624] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.337632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.892 [2024-11-07 10:46:42.337652] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.892 [2024-11-07 10:46:42.337658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:18:14.892 [2024-11-07 10:46:42.337664] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:18:14.892 [2024-11-07 10:46:42.337670] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.337677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:18:14.892 [2024-11-07 10:46:42.337685] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.337694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.892 [2024-11-07 10:46:42.337714] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.892 [2024-11-07 10:46:42.337720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:14.892 [2024-11-07 10:46:42.337726] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:14.892 [2024-11-07 10:46:42.337732] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.337741] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.337748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.892 [2024-11-07 10:46:42.337764] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.892 [2024-11-07 10:46:42.337770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:14.892 [2024-11-07 10:46:42.337776] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:18:14.892 [2024-11-07 10:46:42.337782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:18:14.892 [2024-11-07 10:46:42.337788] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.337795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:14.892 [2024-11-07 10:46:42.337904] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:18:14.892 [2024-11-07 10:46:42.337910] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:14.892 [2024-11-07 10:46:42.337918] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.892 [2024-11-07 10:46:42.337926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.892 [2024-11-07 10:46:42.337942] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.892 [2024-11-07 10:46:42.337947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:14.892 [2024-11-07 10:46:42.337954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:14.892 [2024-11-07 10:46:42.337959] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.337968] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.337975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.893 [2024-11-07 10:46:42.337993] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.893 [2024-11-07 10:46:42.337999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:18:14.893 [2024-11-07 10:46:42.338005] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:14.893 [2024-11-07 10:46:42.338011] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338018] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.338025] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:18:14.893 [2024-11-07 10:46:42.338034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338043] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.338051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181b00 00:18:14.893 [2024-11-07 10:46:42.338085] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.893 [2024-11-07 10:46:42.338090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 
dnr:0 00:18:14.893 [2024-11-07 10:46:42.338098] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:18:14.893 [2024-11-07 10:46:42.338104] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:18:14.893 [2024-11-07 10:46:42.338110] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:18:14.893 [2024-11-07 10:46:42.338117] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:18:14.893 [2024-11-07 10:46:42.338123] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:18:14.893 [2024-11-07 10:46:42.338129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338134] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.338142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338149] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.338157] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.893 [2024-11-07 10:46:42.338177] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.893 [2024-11-07 10:46:42.338182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:14.893 [2024-11-07 10:46:42.338190] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.338197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.893 [2024-11-07 10:46:42.338204] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.338211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.893 [2024-11-07 10:46:42.338218] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.338225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.893 [2024-11-07 10:46:42.338232] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.338239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.893 [2024-11-07 10:46:42.338245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338252] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181b00 
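The trace to this point records the NVMe-oF controller initialization state machine: CC.EN is written to 1 via a FABRIC PROPERTY SET, CSTS.RDY is polled via FABRIC PROPERTY GET, the controller is IDENTIFY'd, the four AER slots are armed, and the keep-alive timeout is being set; the entries that resume below continue with queue-count negotiation and namespace identification. A minimal host-side sketch of driving this same sequence through SPDK's public API follows; this is not part of the test, the transport values are copied from the trace, the application name is hypothetical, and spdk_nvme_connect() is what performs the traced state machine internally.

/*
 * Hedged sketch: connect to the same NVMe-oF target as this trace using
 * SPDK's public host API. spdk_nvme_connect() internally performs the
 * traced steps (CC.EN = 1, poll CSTS.RDY, IDENTIFY, AER setup, keep-alive,
 * queue count, active-namespace scan). Error handling is minimal.
 */
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	struct spdk_nvme_ns *ns;

	spdk_env_opts_init(&opts);
	opts.name = "nvmf_identify_sketch"; /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Same target as the trace: RDMA to 192.168.100.8:4420, cnode1. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Blocking connect; returns once the controller reaches READY. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* The active-namespace scan in the trace reported namespace 1. */
	ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);
	if (ns != NULL) {
		printf("ns 1: %lu sectors\n",
		       (unsigned long)spdk_nvme_ns_get_num_sectors(ns));
	}

	spdk_nvme_detach(ctrlr); /* triggers the shutdown traced further below */
	return 0;
}

The log resumes with the keep-alive and queue-count steps of that sequence.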
00:18:14.893 [2024-11-07 10:46:42.338260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338268] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.338275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.893 [2024-11-07 10:46:42.338295] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.893 [2024-11-07 10:46:42.338301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:18:14.893 [2024-11-07 10:46:42.338307] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:18:14.893 [2024-11-07 10:46:42.338313] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338319] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.338326] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338341] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.338348] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.893 [2024-11-07 10:46:42.338368] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.893 [2024-11-07 10:46:42.338374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:18:14.893 [2024-11-07 10:46:42.338425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338431] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.338439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338447] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.338455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181b00 00:18:14.893 [2024-11-07 10:46:42.338484] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.893 [2024-11-07 10:46:42.338490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:14.893 
[2024-11-07 10:46:42.338500] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:18:14.893 [2024-11-07 10:46:42.338518] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338524] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.338532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338542] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.338550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181b00 00:18:14.893 [2024-11-07 10:46:42.338587] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.893 [2024-11-07 10:46:42.338593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:14.893 [2024-11-07 10:46:42.338606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338612] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.338620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338628] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.338636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181b00 00:18:14.893 [2024-11-07 10:46:42.338659] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.893 [2024-11-07 10:46:42.338665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:14.893 [2024-11-07 10:46:42.338674] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338680] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181b00 00:18:14.893 [2024-11-07 10:46:42.338687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338711] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell 
buffer config (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338724] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:18:14.893 [2024-11-07 10:46:42.338729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:18:14.893 [2024-11-07 10:46:42.338736] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:18:14.894 [2024-11-07 10:46:42.338749] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.894 [2024-11-07 10:46:42.338757] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.894 [2024-11-07 10:46:42.338765] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181b00 00:18:14.894 [2024-11-07 10:46:42.338772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.894 [2024-11-07 10:46:42.338782] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.894 [2024-11-07 10:46:42.338789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:14.894 [2024-11-07 10:46:42.338795] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181b00 00:18:14.894 [2024-11-07 10:46:42.338801] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.894 [2024-11-07 10:46:42.338807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:14.894 [2024-11-07 10:46:42.338813] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181b00 00:18:14.894 [2024-11-07 10:46:42.338822] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181b00 00:18:14.894 [2024-11-07 10:46:42.338829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.894 [2024-11-07 10:46:42.338846] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.894 [2024-11-07 10:46:42.338852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:14.894 [2024-11-07 10:46:42.338858] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181b00 00:18:14.894 [2024-11-07 10:46:42.338867] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181b00 00:18:14.894 [2024-11-07 10:46:42.338875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.894 [2024-11-07 10:46:42.338894] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.894 [2024-11-07 10:46:42.338900] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:14.894 [2024-11-07 10:46:42.338906] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181b00 00:18:14.894 [2024-11-07 10:46:42.338915] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181b00 00:18:14.894 [2024-11-07 10:46:42.338922] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.894 [2024-11-07 10:46:42.338942] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.894 [2024-11-07 10:46:42.338948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:18:14.894 [2024-11-07 10:46:42.338954] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181b00 00:18:14.894 [2024-11-07 10:46:42.338967] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181b00 00:18:14.894 [2024-11-07 10:46:42.338975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x181b00 00:18:14.894 [2024-11-07 10:46:42.338983] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181b00 00:18:14.894 [2024-11-07 10:46:42.338990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x181b00 00:18:14.894 [2024-11-07 10:46:42.338999] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x181b00 00:18:14.894 [2024-11-07 10:46:42.339006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x181b00 00:18:14.894 [2024-11-07 10:46:42.339014] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x181b00 00:18:14.894 [2024-11-07 10:46:42.339022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x181b00 00:18:14.894 [2024-11-07 10:46:42.339032] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.894 [2024-11-07 10:46:42.339037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:14.894 [2024-11-07 10:46:42.339049] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181b00 00:18:14.894 [2024-11-07 10:46:42.339055] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.894 [2024-11-07 10:46:42.339061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:14.894 [2024-11-07 10:46:42.339071] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181b00 00:18:14.894 [2024-11-07 10:46:42.339077] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.894 [2024-11-07 10:46:42.339083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:14.894 [2024-11-07 10:46:42.339090] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181b00 00:18:14.894 [2024-11-07 10:46:42.339096] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.894 [2024-11-07 10:46:42.339101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:14.894 [2024-11-07 10:46:42.339110] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181b00
00:18:14.894 =====================================================
00:18:14.894 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:18:14.894 =====================================================
00:18:14.894 Controller Capabilities/Features
00:18:14.894 ================================
00:18:14.894 Vendor ID: 8086
00:18:14.894 Subsystem Vendor ID: 8086
00:18:14.894 Serial Number: SPDK00000000000001
00:18:14.894 Model Number: SPDK bdev Controller
00:18:14.894 Firmware Version: 25.01
00:18:14.894 Recommended Arb Burst: 6
00:18:14.894 IEEE OUI Identifier: e4 d2 5c
00:18:14.894 Multi-path I/O
00:18:14.894 May have multiple subsystem ports: Yes
00:18:14.894 May have multiple controllers: Yes
00:18:14.894 Associated with SR-IOV VF: No
00:18:14.894 Max Data Transfer Size: 131072
00:18:14.894 Max Number of Namespaces: 32
00:18:14.894 Max Number of I/O Queues: 127
00:18:14.894 NVMe Specification Version (VS): 1.3
00:18:14.894 NVMe Specification Version (Identify): 1.3
00:18:14.894 Maximum Queue Entries: 128
00:18:14.894 Contiguous Queues Required: Yes
00:18:14.894 Arbitration Mechanisms Supported
00:18:14.894 Weighted Round Robin: Not Supported
00:18:14.894 Vendor Specific: Not Supported
00:18:14.894 Reset Timeout: 15000 ms
00:18:14.894 Doorbell Stride: 4 bytes
00:18:14.894 NVM Subsystem Reset: Not Supported
00:18:14.894 Command Sets Supported
00:18:14.894 NVM Command Set: Supported
00:18:14.894 Boot Partition: Not Supported
00:18:14.894 Memory Page Size Minimum: 4096 bytes
00:18:14.894 Memory Page Size Maximum: 4096 bytes
00:18:14.894 Persistent Memory Region: Not Supported
00:18:14.894 Optional Asynchronous Events Supported
00:18:14.894 Namespace Attribute Notices: Supported
00:18:14.894 Firmware Activation Notices: Not Supported
00:18:14.894 ANA Change Notices: Not Supported
00:18:14.894 PLE Aggregate Log Change Notices: Not Supported
00:18:14.894 LBA Status Info Alert Notices: Not Supported
00:18:14.894 EGE Aggregate Log Change Notices: Not Supported
00:18:14.894 Normal NVM Subsystem Shutdown event: Not Supported
00:18:14.894 Zone Descriptor Change Notices: Not Supported
00:18:14.894 Discovery Log Change Notices: Not Supported
00:18:14.894 Controller Attributes
00:18:14.894 128-bit Host Identifier: Supported
00:18:14.894 Non-Operational Permissive Mode: Not Supported
00:18:14.894 NVM Sets: Not Supported
00:18:14.894 Read Recovery Levels: Not Supported
00:18:14.894 Endurance Groups: Not Supported
00:18:14.894 Predictable Latency Mode: Not Supported
00:18:14.894 Traffic Based Keep ALive: Not Supported
00:18:14.894 Namespace Granularity: Not Supported
00:18:14.894 SQ Associations: Not Supported
00:18:14.894 UUID List: Not Supported
00:18:14.894 Multi-Domain Subsystem: Not Supported
00:18:14.894 Fixed Capacity Management: Not Supported
00:18:14.894 Variable Capacity Management: Not Supported
00:18:14.894 Delete Endurance Group: Not Supported
00:18:14.894 Delete NVM Set: Not Supported
00:18:14.894 Extended LBA Formats Supported: Not Supported
00:18:14.894 Flexible Data Placement Supported: Not Supported
00:18:14.894
00:18:14.894 Controller Memory Buffer Support
00:18:14.894 ================================
00:18:14.894 Supported: No
00:18:14.894
00:18:14.894 Persistent Memory Region Support
00:18:14.894 ================================
00:18:14.894 Supported: No
00:18:14.894
00:18:14.894 Admin Command Set Attributes
00:18:14.894 ============================
00:18:14.894 Security Send/Receive: Not Supported
00:18:14.894 Format NVM: Not Supported
00:18:14.894 Firmware Activate/Download: Not Supported
00:18:14.894 Namespace Management: Not Supported
00:18:14.894 Device Self-Test: Not Supported
00:18:14.894 Directives: Not Supported
00:18:14.894 NVMe-MI: Not Supported
00:18:14.894 Virtualization Management: Not Supported
00:18:14.894 Doorbell Buffer Config: Not Supported
00:18:14.894 Get LBA Status Capability: Not Supported
00:18:14.894 Command & Feature Lockdown Capability: Not Supported
00:18:14.894 Abort Command Limit: 4
00:18:14.894 Async Event Request Limit: 4
00:18:14.894 Number of Firmware Slots: N/A
00:18:14.894 Firmware Slot 1 Read-Only: N/A
00:18:14.894 Firmware Activation Without Reset: N/A
00:18:14.894 Multiple Update Detection Support: N/A
00:18:14.894 Firmware Update Granularity: No Information Provided
00:18:14.895 Per-Namespace SMART Log: No
00:18:14.895 Asymmetric Namespace Access Log Page: Not Supported
00:18:14.895 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:18:14.895 Command Effects Log Page: Supported
00:18:14.895 Get Log Page Extended Data: Supported
00:18:14.895 Telemetry Log Pages: Not Supported
00:18:14.895 Persistent Event Log Pages: Not Supported
00:18:14.895 Supported Log Pages Log Page: May Support
00:18:14.895 Commands Supported & Effects Log Page: Not Supported
00:18:14.895 Feature Identifiers & Effects Log Page:May Support
00:18:14.895 NVMe-MI Commands & Effects Log Page: May Support
00:18:14.895 Data Area 4 for Telemetry Log: Not Supported
00:18:14.895 Error Log Page Entries Supported: 128
00:18:14.895 Keep Alive: Supported
00:18:14.895 Keep Alive Granularity: 10000 ms
00:18:14.895
00:18:14.895 NVM Command Set Attributes
00:18:14.895 ==========================
00:18:14.895 Submission Queue Entry Size
00:18:14.895 Max: 64
00:18:14.895 Min: 64
00:18:14.895 Completion Queue Entry Size
00:18:14.895 Max: 16
00:18:14.895 Min: 16
00:18:14.895 Number of Namespaces: 32
00:18:14.895 Compare Command: Supported
00:18:14.895 Write Uncorrectable Command: Not Supported
00:18:14.895 Dataset Management Command: Supported
00:18:14.895 Write Zeroes Command: Supported
00:18:14.895 Set Features Save Field: Not Supported
00:18:14.895 Reservations: Supported
00:18:14.895 Timestamp: Not Supported
00:18:14.895 Copy: Supported
00:18:14.895 Volatile Write Cache: Present
00:18:14.895 Atomic Write Unit (Normal): 1
00:18:14.895 Atomic Write Unit (PFail): 1
00:18:14.895 Atomic Compare & Write Unit: 1
00:18:14.895 Fused Compare & Write: Supported
00:18:14.895 Scatter-Gather List
00:18:14.895 SGL Command Set: Supported
00:18:14.895 SGL Keyed: Supported
00:18:14.895 SGL Bit Bucket Descriptor: Not Supported
00:18:14.895 SGL Metadata Pointer: Not Supported
00:18:14.895 Oversized SGL: Not Supported
00:18:14.895 SGL Metadata Address: Not Supported
00:18:14.895 SGL Offset: Supported
00:18:14.895 Transport SGL Data Block: Not Supported
00:18:14.895 Replay Protected Memory Block: Not Supported
00:18:14.895
00:18:14.895 Firmware Slot Information
00:18:14.895 =========================
00:18:14.895 Active slot: 1
00:18:14.895 Slot 1 Firmware Revision: 25.01
00:18:14.895
00:18:14.895
00:18:14.895 Commands Supported and Effects
00:18:14.895 ==============================
00:18:14.895 Admin Commands
00:18:14.895 --------------
00:18:14.895 Get Log Page (02h): Supported
00:18:14.895 Identify (06h): Supported
00:18:14.895 Abort (08h): Supported
00:18:14.895 Set Features (09h): Supported
00:18:14.895 Get Features (0Ah): Supported
00:18:14.895 Asynchronous Event Request (0Ch): Supported
00:18:14.895 Keep Alive (18h): Supported
00:18:14.895 I/O Commands
00:18:14.895 ------------
00:18:14.895 Flush (00h): Supported LBA-Change
00:18:14.895 Write (01h): Supported LBA-Change
00:18:14.895 Read (02h): Supported
00:18:14.895 Compare (05h): Supported
00:18:14.895 Write Zeroes (08h): Supported LBA-Change
00:18:14.895 Dataset Management (09h): Supported LBA-Change
00:18:14.895 Copy (19h): Supported LBA-Change
00:18:14.895
00:18:14.895 Error Log
00:18:14.895 =========
00:18:14.895
00:18:14.895 Arbitration
00:18:14.895 ===========
00:18:14.895 Arbitration Burst: 1
00:18:14.895
00:18:14.895 Power Management
00:18:14.895 ================
00:18:14.895 Number of Power States: 1
00:18:14.895 Current Power State: Power State #0
00:18:14.895 Power State #0:
00:18:14.895 Max Power: 0.00 W
00:18:14.895 Non-Operational State: Operational
00:18:14.895 Entry Latency: Not Reported
00:18:14.895 Exit Latency: Not Reported
00:18:14.895 Relative Read Throughput: 0
00:18:14.895 Relative Read Latency: 0
00:18:14.895 Relative Write Throughput: 0
00:18:14.895 Relative Write Latency: 0
00:18:14.895 Idle Power: Not Reported
00:18:14.895 Active Power: Not Reported
00:18:14.895 Non-Operational Permissive Mode: Not Supported
00:18:14.895
00:18:14.895 Health Information
00:18:14.895 ==================
00:18:14.895 Critical Warnings:
00:18:14.895 Available Spare Space: OK
00:18:14.895 Temperature: OK
00:18:14.895 Device Reliability: OK
00:18:14.895 Read Only: No
00:18:14.895 Volatile Memory Backup: OK
00:18:14.895 Current Temperature: 0 Kelvin (-273 Celsius)
00:18:14.895 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:18:14.895 Available Spare: 0%
00:18:14.895 Available Spare Threshold: 0%
00:18:14.895 Life Percentage [2024-11-07 10:46:42.339187] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x181b00 00:18:14.895 [2024-11-07 10:46:42.339196] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.895 [2024-11-07 10:46:42.339216] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.895 [2024-11-07 10:46:42.339221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:14.895 [2024-11-07 10:46:42.339228] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181b00 00:18:14.895 [2024-11-07 10:46:42.339255] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:18:14.895 [2024-11-07 10:46:42.339264] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 36966 doesn't match qid 00:18:14.895 [2024-11-07 10:46:42.339277] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:74ef5190 sqhd:5a40 p:0 m:0 dnr:0 00:18:14.895 [2024-11-07 10:46:42.339284] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 36966 doesn't match qid 00:18:14.895 [2024-11-07 10:46:42.339292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:74ef5190 sqhd:5a40 p:0 m:0 dnr:0 00:18:14.895 [2024-11-07 10:46:42.339298] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 36966 doesn't match qid 00:18:14.895 [2024-11-07 10:46:42.339305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:74ef5190 sqhd:5a40 p:0 m:0 dnr:0 00:18:14.895 [2024-11-07 10:46:42.339312] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 36966 doesn't match qid 00:18:14.895 [2024-11-07 10:46:42.339319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32759 cdw0:74ef5190 sqhd:5a40 p:0 m:0 dnr:0 00:18:14.895 [2024-11-07 10:46:42.339327] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181b00 00:18:14.895 [2024-11-07 10:46:42.339335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.895 [2024-11-07 10:46:42.339350] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.895 [2024-11-07 10:46:42.339357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:18:14.895 [2024-11-07 10:46:42.339365] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.895 [2024-11-07 10:46:42.339373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.895 [2024-11-07 10:46:42.339379] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181b00 00:18:14.895 [2024-11-07 10:46:42.339391] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.895 [2024-11-07 10:46:42.339397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:14.895 [2024-11-07 10:46:42.339403] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:18:14.895 [2024-11-07 10:46:42.339410] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:18:14.895 [2024-11-07 10:46:42.339416] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181b00 00:18:14.895 [2024-11-07 10:46:42.339424] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.895 [2024-11-07 10:46:42.339431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.895 [2024-11-07 10:46:42.339455] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.895 [2024-11-07 10:46:42.339460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:18:14.895 [2024-11-07 10:46:42.339467] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181b00 00:18:14.895 [2024-11-07 10:46:42.339476] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.895 [2024-11-07 10:46:42.339484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.895 [2024-11-07 10:46:42.339500] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.895 [2024-11-07 10:46:42.339505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:18:14.895 [2024-11-07 10:46:42.339517] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181b00 00:18:14.895 [2024-11-07 10:46:42.339526] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.895 [2024-11-07 10:46:42.339533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.895 [2024-11-07 10:46:42.339551] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.895 [2024-11-07 10:46:42.339556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:18:14.895 [2024-11-07 10:46:42.339563] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181b00 00:18:14.895 [2024-11-07 10:46:42.339571] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.339579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.896 [2024-11-07 10:46:42.339604] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.896 [2024-11-07 10:46:42.339610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:18:14.896 [2024-11-07 10:46:42.339616] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.339626] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.339634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.896 [2024-11-07 10:46:42.339648] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.896 [2024-11-07 10:46:42.339654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:18:14.896 [2024-11-07 10:46:42.339660] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.339669] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.339677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.896 [2024-11-07 10:46:42.339693] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.896 [2024-11-07 10:46:42.339698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:18:14.896 [2024-11-07 10:46:42.339705] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.339713] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.339721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.896 [2024-11-07 10:46:42.339743] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.896 [2024-11-07 10:46:42.339748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:14.896 [2024-11-07 10:46:42.339754] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.339763] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.339771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.896 [2024-11-07 10:46:42.339788] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.896 [2024-11-07 10:46:42.339794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:18:14.896 [2024-11-07 10:46:42.339800] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.339809] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.339817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.896 [2024-11-07 10:46:42.339836] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.896 [2024-11-07 10:46:42.339842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:18:14.896 [2024-11-07 10:46:42.339848] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.339856] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.339864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.896 [2024-11-07 10:46:42.339880] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.896 [2024-11-07 10:46:42.339885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:18:14.896 [2024-11-07 10:46:42.339891] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.339901] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 
length 0x40 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.339909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.896 [2024-11-07 10:46:42.339930] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.896 [2024-11-07 10:46:42.339936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:18:14.896 [2024-11-07 10:46:42.339942] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.339950] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.339958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.896 [2024-11-07 10:46:42.339979] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.896 [2024-11-07 10:46:42.339985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:18:14.896 [2024-11-07 10:46:42.339991] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.340000] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.340007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.896 [2024-11-07 10:46:42.340025] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.896 [2024-11-07 10:46:42.340030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:18:14.896 [2024-11-07 10:46:42.340036] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.340045] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.340053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.896 [2024-11-07 10:46:42.340074] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.896 [2024-11-07 10:46:42.340079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:18:14.896 [2024-11-07 10:46:42.340086] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.340094] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.340102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.896 [2024-11-07 10:46:42.340120] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.896 [2024-11-07 10:46:42.340125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:18:14.896 [2024-11-07 
10:46:42.340131] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.340140] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.340147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.896 [2024-11-07 10:46:42.340167] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.896 [2024-11-07 10:46:42.340172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:18:14.896 [2024-11-07 10:46:42.340180] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.340189] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.340197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.896 [2024-11-07 10:46:42.340216] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.896 [2024-11-07 10:46:42.340221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:18:14.896 [2024-11-07 10:46:42.340228] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.340236] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.896 [2024-11-07 10:46:42.340244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.896 [2024-11-07 10:46:42.340259] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.897 [2024-11-07 10:46:42.340265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:18:14.897 [2024-11-07 10:46:42.340271] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340280] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.897 [2024-11-07 10:46:42.340307] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.897 [2024-11-07 10:46:42.340312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:18:14.897 [2024-11-07 10:46:42.340319] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340327] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.897 [2024-11-07 10:46:42.340356] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.897 [2024-11-07 10:46:42.340361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:18:14.897 [2024-11-07 10:46:42.340368] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340376] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.897 [2024-11-07 10:46:42.340405] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.897 [2024-11-07 10:46:42.340411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:18:14.897 [2024-11-07 10:46:42.340417] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340425] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.897 [2024-11-07 10:46:42.340457] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.897 [2024-11-07 10:46:42.340462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:18:14.897 [2024-11-07 10:46:42.340470] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340478] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.897 [2024-11-07 10:46:42.340505] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.897 [2024-11-07 10:46:42.340514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:18:14.897 [2024-11-07 10:46:42.340520] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340529] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.897 [2024-11-07 10:46:42.340553] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.897 [2024-11-07 10:46:42.340558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:18:14.897 [2024-11-07 10:46:42.340564] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340573] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 
length 0x40 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.897 [2024-11-07 10:46:42.340596] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.897 [2024-11-07 10:46:42.340601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:18:14.897 [2024-11-07 10:46:42.340608] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340616] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.897 [2024-11-07 10:46:42.340645] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.897 [2024-11-07 10:46:42.340651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:18:14.897 [2024-11-07 10:46:42.340657] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340666] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.897 [2024-11-07 10:46:42.340694] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.897 [2024-11-07 10:46:42.340700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:18:14.897 [2024-11-07 10:46:42.340706] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340715] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.897 [2024-11-07 10:46:42.340745] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.897 [2024-11-07 10:46:42.340752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:18:14.897 [2024-11-07 10:46:42.340758] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340767] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.897 [2024-11-07 10:46:42.340792] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.897 [2024-11-07 10:46:42.340798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:18:14.897 [2024-11-07 
10:46:42.340804] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340813] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.897 [2024-11-07 10:46:42.340838] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.897 [2024-11-07 10:46:42.340843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:18:14.897 [2024-11-07 10:46:42.340849] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340858] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.897 [2024-11-07 10:46:42.340889] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.897 [2024-11-07 10:46:42.340894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:18:14.897 [2024-11-07 10:46:42.340901] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340909] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.897 [2024-11-07 10:46:42.340940] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.897 [2024-11-07 10:46:42.340945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:18:14.897 [2024-11-07 10:46:42.340952] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340960] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.340968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.897 [2024-11-07 10:46:42.340987] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.897 [2024-11-07 10:46:42.340993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:18:14.897 [2024-11-07 10:46:42.340999] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.341008] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.341015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.897 [2024-11-07 10:46:42.341035] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.897 [2024-11-07 10:46:42.341041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:18:14.897 [2024-11-07 10:46:42.341048] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.341056] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.341064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.897 [2024-11-07 10:46:42.341084] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.897 [2024-11-07 10:46:42.341089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:18:14.897 [2024-11-07 10:46:42.341095] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.341104] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.897 [2024-11-07 10:46:42.341112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.897 [2024-11-07 10:46:42.341129] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.898 [2024-11-07 10:46:42.341134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:18:14.898 [2024-11-07 10:46:42.341141] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.341149] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.341157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.898 [2024-11-07 10:46:42.341180] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.898 [2024-11-07 10:46:42.341185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:18:14.898 [2024-11-07 10:46:42.341192] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.341200] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.341208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.898 [2024-11-07 10:46:42.341231] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.898 [2024-11-07 10:46:42.341236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:18:14.898 [2024-11-07 10:46:42.341243] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.341251] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 
length 0x40 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.341259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.898 [2024-11-07 10:46:42.341277] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.898 [2024-11-07 10:46:42.341282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:14.898 [2024-11-07 10:46:42.341288] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.341297] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.341304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.898 [2024-11-07 10:46:42.341327] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.898 [2024-11-07 10:46:42.341333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:18:14.898 [2024-11-07 10:46:42.341339] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.341348] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.341355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.898 [2024-11-07 10:46:42.341371] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.898 [2024-11-07 10:46:42.341376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:18:14.898 [2024-11-07 10:46:42.341382] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.341391] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.341399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.898 [2024-11-07 10:46:42.341420] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.898 [2024-11-07 10:46:42.341425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:18:14.898 [2024-11-07 10:46:42.341431] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.341440] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.341448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.898 [2024-11-07 10:46:42.341463] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.898 [2024-11-07 10:46:42.341469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:18:14.898 [2024-11-07 
10:46:42.341475] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.341484] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.341491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.898 [2024-11-07 10:46:42.345511] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.898 [2024-11-07 10:46:42.345519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:18:14.898 [2024-11-07 10:46:42.345525] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.345534] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.345542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:14.898 [2024-11-07 10:46:42.345563] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:14.898 [2024-11-07 10:46:42.345569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0007 p:0 m:0 dnr:0 00:18:14.898 [2024-11-07 10:46:42.345575] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181b00 00:18:14.898 [2024-11-07 10:46:42.345582] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:18:14.898 Used: 0% 00:18:14.898 Data Units Read: 0 00:18:14.898 Data Units Written: 0 00:18:14.898 Host Read Commands: 0 00:18:14.898 Host Write Commands: 0 00:18:14.898 Controller Busy Time: 0 minutes 00:18:14.898 Power Cycles: 0 00:18:14.898 Power On Hours: 0 hours 00:18:14.898 Unsafe Shutdowns: 0 00:18:14.898 Unrecoverable Media Errors: 0 00:18:14.898 Lifetime Error Log Entries: 0 00:18:14.898 Warning Temperature Time: 0 minutes 00:18:14.898 Critical Temperature Time: 0 minutes 00:18:14.898 00:18:14.898 Number of Queues 00:18:14.898 ================ 00:18:14.898 Number of I/O Submission Queues: 127 00:18:14.898 Number of I/O Completion Queues: 127 00:18:14.898 00:18:14.898 Active Namespaces 00:18:14.898 ================= 00:18:14.898 Namespace ID:1 00:18:14.898 Error Recovery Timeout: Unlimited 00:18:14.898 Command Set Identifier: NVM (00h) 00:18:14.898 Deallocate: Supported 00:18:14.898 Deallocated/Unwritten Error: Not Supported 00:18:14.898 Deallocated Read Value: Unknown 00:18:14.898 Deallocate in Write Zeroes: Not Supported 00:18:14.898 Deallocated Guard Field: 0xFFFF 00:18:14.898 Flush: Supported 00:18:14.898 Reservation: Supported 00:18:14.898 Namespace Sharing Capabilities: Multiple Controllers 00:18:14.898 Size (in LBAs): 131072 (0GiB) 00:18:14.898 Capacity (in LBAs): 131072 (0GiB) 00:18:14.898 Utilization (in LBAs): 131072 (0GiB) 00:18:14.898 NGUID: ABCDEF0123456789ABCDEF0123456789 00:18:14.898 EUI64: ABCDEF0123456789 00:18:14.898 UUID: 03cdb203-c9d5-4841-bf94-ff537845fca1 00:18:14.898 Thin Provisioning: Not Supported 00:18:14.898 Per-NS Atomic Units: Yes 00:18:14.898 Atomic Boundary Size (Normal): 0 00:18:14.898 Atomic Boundary Size (PFail): 0 00:18:14.898 Atomic Boundary Offset: 0 00:18:14.898 
Maximum Single Source Range Length: 65535 00:18:14.898 Maximum Copy Length: 65535 00:18:14.898 Maximum Source Range Count: 1 00:18:14.898 NGUID/EUI64 Never Reused: No 00:18:14.898 Namespace Write Protected: No 00:18:14.898 Number of LBA Formats: 1 00:18:14.898 Current LBA Format: LBA Format #00 00:18:14.898 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:14.898 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:14.898 rmmod nvme_rdma 00:18:14.898 rmmod nvme_fabrics 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3820278 ']' 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3820278 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 3820278 ']' 00:18:14.898 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 3820278 00:18:14.899 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:18:14.899 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:14.899 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3820278 00:18:14.899 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:14.899 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:14.899 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3820278' 00:18:14.899 killing process with pid 3820278 00:18:14.899 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 3820278 00:18:14.899 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # 
wait 3820278 00:18:15.158 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:15.158 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:15.158 00:18:15.158 real 0m8.992s 00:18:15.158 user 0m8.987s 00:18:15.158 sys 0m5.707s 00:18:15.158 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:15.158 10:46:42 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:15.158 ************************************ 00:18:15.158 END TEST nvmf_identify 00:18:15.158 ************************************ 00:18:15.158 10:46:42 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:18:15.158 10:46:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:15.158 10:46:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:15.158 10:46:42 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.158 ************************************ 00:18:15.158 START TEST nvmf_perf 00:18:15.158 ************************************ 00:18:15.158 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:18:15.418 * Looking for test storage... 00:18:15.418 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:15.418 10:46:42 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:15.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.418 --rc genhtml_branch_coverage=1 00:18:15.418 --rc genhtml_function_coverage=1 00:18:15.418 --rc genhtml_legend=1 00:18:15.418 --rc geninfo_all_blocks=1 00:18:15.418 --rc geninfo_unexecuted_blocks=1 00:18:15.418 00:18:15.418 ' 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:15.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.418 --rc genhtml_branch_coverage=1 00:18:15.418 --rc genhtml_function_coverage=1 00:18:15.418 --rc genhtml_legend=1 00:18:15.418 --rc geninfo_all_blocks=1 00:18:15.418 --rc geninfo_unexecuted_blocks=1 00:18:15.418 00:18:15.418 ' 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:15.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.418 --rc genhtml_branch_coverage=1 00:18:15.418 --rc genhtml_function_coverage=1 00:18:15.418 --rc genhtml_legend=1 00:18:15.418 --rc geninfo_all_blocks=1 00:18:15.418 --rc geninfo_unexecuted_blocks=1 00:18:15.418 00:18:15.418 ' 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:15.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.418 --rc genhtml_branch_coverage=1 00:18:15.418 --rc genhtml_function_coverage=1 00:18:15.418 --rc genhtml_legend=1 00:18:15.418 --rc geninfo_all_blocks=1 00:18:15.418 --rc geninfo_unexecuted_blocks=1 00:18:15.418 00:18:15.418 ' 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.418 10:46:43 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:15.418 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:15.418 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:15.419 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:15.419 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:18:15.419 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:15.419 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.419 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:15.419 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:15.419 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:15.419 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.419 10:46:43 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:15.419 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.419 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:15.419 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:15.419 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:18:15.419 10:46:43 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:23.542 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:23.542 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:18:23.542 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:23.542 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:23.542 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:23.543 10:46:49 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:23.543 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:23.543 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:23.543 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
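The "Found net devices under ..." discovery above (and the matching pass for 0000:d9:00.1 that follows) boils down to a sysfs glob. A minimal standalone sketch, assuming the PCI addresses and Mellanox 0x15b3:0x1015 device IDs reported by this run:

    # Map each RDMA-capable PCI function to the net devices sysfs exposes under it.
    for pci in 0000:d9:00.0 0000:d9:00.1; do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdev" ] && echo "Found net devices under $pci: ${netdev##*/}"
        done
    done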
00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:23.543 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:23.543 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:23.543 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:23.543 altname enp217s0f0np0 00:18:23.543 altname ens818f0np0 00:18:23.543 inet 192.168.100.8/24 scope global mlx_0_0 00:18:23.543 valid_lft forever preferred_lft forever 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:23.543 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:23.544 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:23.544 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:23.544 altname enp217s0f1np1 00:18:23.544 altname ens818f1np1 00:18:23.544 inet 192.168.100.9/24 scope global mlx_0_1 00:18:23.544 valid_lft forever preferred_lft forever 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 
-- # '[' '' == iso ']' 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:23.544 10:46:49 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 
-- # RDMA_IP_LIST='192.168.100.8 00:18:23.544 192.168.100.9' 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:23.544 192.168.100.9' 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:23.544 192.168.100.9' 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3823952 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3823952 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 3823952 ']' 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:23.544 [2024-11-07 10:46:50.117469] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
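The RDMA_IP_LIST / NVMF_*_TARGET_IP values above are harvested with a small ip/awk/cut pipeline; a condensed sketch of that idiom, assuming the mlx_0_0 and mlx_0_1 interface names from this run:

    # Strip the /24 prefix length from the interface's first IPv4 address.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8 here
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9 here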
00:18:23.544 [2024-11-07 10:46:50.117522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.544 [2024-11-07 10:46:50.192232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:23.544 [2024-11-07 10:46:50.233511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.544 [2024-11-07 10:46:50.233566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.544 [2024-11-07 10:46:50.233576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.544 [2024-11-07 10:46:50.233584] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.544 [2024-11-07 10:46:50.233591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:23.544 [2024-11-07 10:46:50.235361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.544 [2024-11-07 10:46:50.235455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.544 [2024-11-07 10:46:50.235550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:23.544 [2024-11-07 10:46:50.235552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:18:23.544 10:46:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:18:26.077 10:46:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:18:26.078 10:46:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:18:26.078 10:46:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:18:26.078 10:46:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:26.336 10:46:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:26.336 10:46:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:18:26.336 10:46:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:26.336 10:46:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:18:26.336 10:46:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:18:26.595 [2024-11-07 10:46:54.037814] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:18:26.595 [2024-11-07 10:46:54.059163] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1635a70/0x1647fd0) succeed. 00:18:26.595 [2024-11-07 10:46:54.068642] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1638110/0x16c8040) succeed. 00:18:26.595 10:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:26.854 10:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:26.854 10:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:27.112 10:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:27.112 10:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:27.371 10:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:27.371 [2024-11-07 10:46:54.964859] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:27.371 10:46:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:27.630 10:46:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:18:27.630 10:46:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:18:27.630 10:46:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:18:27.630 10:46:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:18:29.007 Initializing NVMe Controllers 00:18:29.007 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:18:29.007 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:18:29.007 Initialization complete. Launching workers. 
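Stripped of the xtrace noise, the target-side bring-up above condenses to a handful of RPCs (here rpc.py abbreviates the full /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py path; every argument is taken from this run). Note that the -c 0 in-capsule data size is raised to 256 by the target itself, per the rdma.c warning above:

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
    rpc.py bdev_malloc_create 64 512                                   # creates Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420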
00:18:29.007 ========================================================
00:18:29.007 Latency(us)
00:18:29.007 Device Information : IOPS MiB/s Average min max
00:18:29.007 PCIE (0000:d8:00.0) NSID 1 from core 0: 101733.42 397.40 314.11 33.92 4247.24
00:18:29.007 ========================================================
00:18:29.007 Total : 101733.42 397.40 314.11 33.92 4247.24
00:18:29.007
00:18:29.007 10:46:56 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:18:32.294 Initializing NVMe Controllers
00:18:32.294 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:18:32.294 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:32.294 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:18:32.294 Initialization complete. Launching workers.
00:18:32.294 ========================================================
00:18:32.294 Latency(us)
00:18:32.294 Device Information : IOPS MiB/s Average min max
00:18:32.294 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6706.99 26.20 148.76 50.11 4091.97
00:18:32.294 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5202.99 20.32 191.81 68.11 4110.17
00:18:32.294 ========================================================
00:18:32.294 Total : 11909.99 46.52 167.56 50.11 4110.17
00:18:32.294
00:18:32.294 10:46:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:18:35.582 Initializing NVMe Controllers
00:18:35.582 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:18:35.582 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:35.582 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:18:35.582 Initialization complete. Launching workers.
00:18:35.582 ========================================================
00:18:35.582 Latency(us)
00:18:35.582 Device Information : IOPS MiB/s Average min max
00:18:35.582 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18386.00 71.82 1739.05 488.94 5534.34
00:18:35.582 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7965.18 6164.25 8197.37
00:18:35.582 ========================================================
00:18:35.582 Total : 22418.00 87.57 2858.85 488.94 8197.37
00:18:35.582
00:18:35.840 10:47:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]]
00:18:35.840 10:47:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:18:40.031 Initializing NVMe Controllers
00:18:40.031 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:18:40.031 Controller IO queue size 128, less than required.
00:18:40.031 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
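One way to read the -q 1 Total row above: the aggregate average is consistent with an IOPS-weighted mean of the per-namespace latencies (an assumption about how the tool aggregates; it matches up to input rounding):

    awk 'BEGIN { printf "%.2f\n", (6706.99*148.76 + 5202.99*191.81) / 11909.99 }'
    # -> 167.57, against the reported 167.56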
00:18:40.031 Controller IO queue size 128, less than required. 00:18:40.031 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:40.031 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:40.031 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:40.031 Initialization complete. Launching workers. 00:18:40.031 ======================================================== 00:18:40.031 Latency(us) 00:18:40.031 Device Information : IOPS MiB/s Average min max 00:18:40.031 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4003.90 1000.98 32179.32 15371.64 87925.84 00:18:40.031 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4041.87 1010.47 31279.50 15098.32 54330.08 00:18:40.031 ======================================================== 00:18:40.031 Total : 8045.77 2011.44 31727.29 15098.32 87925.84 00:18:40.031 00:18:40.031 10:47:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:18:40.601 No valid NVMe controllers or AIO or URING devices found 00:18:40.601 Initializing NVMe Controllers 00:18:40.601 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:40.601 Controller IO queue size 128, less than required. 00:18:40.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:40.601 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:40.601 Controller IO queue size 128, less than required. 00:18:40.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:40.601 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:18:40.601 WARNING: Some requested NVMe devices were skipped 00:18:40.601 10:47:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:18:44.795 Initializing NVMe Controllers 00:18:44.795 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:44.795 Controller IO queue size 128, less than required. 00:18:44.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:44.795 Controller IO queue size 128, less than required. 00:18:44.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:44.795 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:44.795 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:44.795 Initialization complete. Launching workers. 
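The skipped namespaces in the -o 36964 run above follow from simple alignment arithmetic: the IO size must be a multiple of the 512-byte sector size, and 36964 is not:

    echo $(( 36964 % 512 ))   # -> 100, since 36964 = 72*512 + 100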
00:18:40.601 10:47:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat
00:18:44.795 Initializing NVMe Controllers
00:18:44.795 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:18:44.795 Controller IO queue size 128, less than required.
00:18:44.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:44.795 Controller IO queue size 128, less than required.
00:18:44.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:44.795 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:44.795 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:18:44.795 Initialization complete. Launching workers.
00:18:44.795
00:18:44.795 ====================
00:18:44.795 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:18:44.795 RDMA transport:
00:18:44.795 dev name: mlx5_0
00:18:44.795 polls: 405511
00:18:44.795 idle_polls: 401880
00:18:44.795 completions: 45290
00:18:44.795 queued_requests: 1
00:18:44.795 total_send_wrs: 22645
00:18:44.795 send_doorbell_updates: 3429
00:18:44.795 total_recv_wrs: 22772
00:18:44.795 recv_doorbell_updates: 3431
00:18:44.795 ---------------------------------
00:18:44.795
00:18:44.795 ====================
00:18:44.795 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:18:44.795 RDMA transport:
00:18:44.795 dev name: mlx5_0
00:18:44.795 polls: 411475
00:18:44.795 idle_polls: 411206
00:18:44.795 completions: 19842
00:18:44.795 queued_requests: 1
00:18:44.795 total_send_wrs: 9921
00:18:44.795 send_doorbell_updates: 251
00:18:44.795 total_recv_wrs: 10048
00:18:44.795 recv_doorbell_updates: 253
00:18:44.795 ---------------------------------
00:18:44.795 ========================================================
00:18:44.795 Latency(us)
00:18:44.795 Device Information : IOPS MiB/s Average min max
00:18:44.795 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5654.51 1413.63 22618.66 9221.31 66487.73
00:18:44.795 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2477.16 619.29 51534.32 31662.29 77615.00
00:18:44.795 ========================================================
00:18:44.795 Total : 8131.67 2032.92 31427.26 9221.31 77615.00
00:18:44.795
00:18:45.054 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:18:45.054 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:45.054 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:18:45.054 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:18:45.054 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:18:45.054 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:45.054 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:18:45.054 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:18:45.054 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:18:45.054 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:18:45.054 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:45.054 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:18:45.054 rmmod nvme_rdma
00:18:45.054 rmmod nvme_fabrics
00:18:45.313 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:45.313 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:18:45.313 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:18:45.313 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3823952 ']'
00:18:45.313 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3823952
00:18:45.313 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 3823952 ']'
00:18:45.313 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 3823952
00:18:45.313 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname
00:18:45.313 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:18:45.313 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3823952
00:18:45.313 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:18:45.313 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:18:45.313 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3823952'
00:18:45.313 killing process with pid 3823952
00:18:45.313 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 3823952
00:18:45.313 10:47:12 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 3823952
00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:18:47.847
00:18:47.847 real 0m32.407s
00:18:47.847 user 1m42.064s
00:18:47.847 sys 0m6.924s
00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:18:47.847 ************************************
00:18:47.847 END TEST nvmf_perf
00:18:47.847 ************************************
00:18:47.847 10:47:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma
00:18:47.847 10:47:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:18:47.847 10:47:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:18:47.847 10:47:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:18:47.847 ************************************
00:18:47.847 START TEST nvmf_fio_host
00:18:47.847 ************************************
00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma
00:18:47.847 * Looking for test storage...
00:18:47.847 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:47.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.847 --rc genhtml_branch_coverage=1 00:18:47.847 --rc genhtml_function_coverage=1 00:18:47.847 --rc genhtml_legend=1 00:18:47.847 --rc geninfo_all_blocks=1 00:18:47.847 --rc geninfo_unexecuted_blocks=1 00:18:47.847 00:18:47.847 ' 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:47.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.847 --rc genhtml_branch_coverage=1 00:18:47.847 --rc genhtml_function_coverage=1 00:18:47.847 --rc genhtml_legend=1 00:18:47.847 --rc geninfo_all_blocks=1 00:18:47.847 --rc geninfo_unexecuted_blocks=1 00:18:47.847 00:18:47.847 ' 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:47.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.847 --rc genhtml_branch_coverage=1 00:18:47.847 --rc genhtml_function_coverage=1 00:18:47.847 --rc genhtml_legend=1 00:18:47.847 --rc geninfo_all_blocks=1 00:18:47.847 --rc geninfo_unexecuted_blocks=1 00:18:47.847 00:18:47.847 ' 00:18:47.847 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:47.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.847 --rc genhtml_branch_coverage=1 00:18:47.847 --rc genhtml_function_coverage=1 00:18:47.847 --rc genhtml_legend=1 00:18:47.847 --rc geninfo_all_blocks=1 00:18:47.847 --rc geninfo_unexecuted_blocks=1 00:18:47.847 00:18:47.847 ' 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.848 10:47:15 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:47.848 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:47.848 
10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:18:47.848 10:47:15 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:54.526 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:54.527 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:54.527 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:54.527 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:54.527 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:54.527 
10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:54.527 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:54.527 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:54.527 altname enp217s0f0np0 00:18:54.527 altname ens818f0np0 00:18:54.527 inet 192.168.100.8/24 scope global mlx_0_0 00:18:54.527 valid_lft forever preferred_lft forever 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:54.527 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:54.527 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:54.527 altname enp217s0f1np1 00:18:54.527 altname ens818f1np1 00:18:54.527 inet 192.168.100.9/24 scope global mlx_0_1 00:18:54.527 valid_lft forever preferred_lft forever 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:54.527 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:54.528 10:47:21 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:54.528 192.168.100.9' 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:54.528 192.168.100.9' 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:54.528 192.168.100.9' 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3831277 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3831277 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 3831277 ']' 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:54.528 10:47:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.528 [2024-11-07 10:47:21.854161] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:18:54.528 [2024-11-07 10:47:21.854214] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.528 [2024-11-07 10:47:21.932781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:54.528 [2024-11-07 10:47:21.974642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.528 [2024-11-07 10:47:21.974684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.528 [2024-11-07 10:47:21.974693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.528 [2024-11-07 10:47:21.974701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.528 [2024-11-07 10:47:21.974709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:54.528 [2024-11-07 10:47:21.976331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.528 [2024-11-07 10:47:21.976369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.528 [2024-11-07 10:47:21.976456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:54.528 [2024-11-07 10:47:21.976459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.096 10:47:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:55.096 10:47:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:18:55.096 10:47:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:55.355 [2024-11-07 10:47:22.894033] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdebdf0/0xdf02e0) succeed. 00:18:55.355 [2024-11-07 10:47:22.903387] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xded480/0xe31980) succeed. 
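With the RDMA transport created just above (nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192), the trace that follows builds out the rest of the target over rpc.py: a malloc bdev, a subsystem, a namespace, and two listeners. Collected in one place for readability, the sequence amounts to the sketch below; these are the same commands that appear interleaved in the trace, with explanatory comments added:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    "$RPC" bdev_malloc_create 64 512 -b Malloc1              # 64 MB RAM-backed bdev, 512 B blocks
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1    # expose the bdev as a namespace
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    "$RPC" nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420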
00:18:55.614 10:47:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt
00:18:55.614 10:47:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable
00:18:55.614 10:47:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:18:55.614 10:47:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:18:55.873 Malloc1
00:18:55.873 10:47:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:18:55.873 10:47:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:18:56.131 10:47:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:18:56.390 [2024-11-07 10:47:23.867762] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:18:56.390 10:47:23 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:18:56.649 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme
00:18:56.649 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
00:18:56.649 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
00:18:56.649 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib=
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]]
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]]
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme'
00:18:56.650 10:47:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
00:18:56.909 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:18:56.909 fio-3.35
00:18:56.909 Starting 1 thread
00:18:59.473
00:18:59.474 test: (groupid=0, jobs=1): err= 0: pid=3831947: Thu Nov 7 10:47:26 2024
00:18:59.474 read: IOPS=18.0k, BW=70.2MiB/s (73.6MB/s)(141MiB/2004msec)
00:18:59.474 slat (nsec): min=1341, max=29834, avg=1447.68, stdev=382.74
00:18:59.474 clat (usec): min=1752, max=6493, avg=3535.32, stdev=76.51
00:18:59.474 lat (usec): min=1770, max=6494, avg=3536.77, stdev=76.41
00:18:59.474 clat percentiles (usec):
00:18:59.474 | 1.00th=[ 3490], 5.00th=[ 3523], 10.00th=[ 3523], 20.00th=[ 3523],
00:18:59.474 | 30.00th=[ 3523], 40.00th=[ 3523], 50.00th=[ 3523], 60.00th=[ 3523],
00:18:59.474 | 70.00th=[ 3556], 80.00th=[ 3556], 90.00th=[ 3556], 95.00th=[ 3556],
00:18:59.474 | 99.00th=[ 3589], 99.50th=[ 3589], 99.90th=[ 4293], 99.95th=[ 5145],
00:18:59.474 | 99.99th=[ 6456]
00:18:59.474 bw ( KiB/s): min=70552, max=72664, per=100.00%, avg=71942.00, stdev=953.99, samples=4
00:18:59.474 iops : min=17638, max=18166, avg=17985.50, stdev=238.50, samples=4
00:18:59.474 write: IOPS=18.0k, BW=70.3MiB/s (73.7MB/s)(141MiB/2004msec); 0 zone resets
00:18:59.474 slat (nsec): min=1380, max=17265, avg=1526.96, stdev=391.32
00:18:59.474 clat (usec): min=1778, max=6487, avg=3534.47, stdev=85.87
00:18:59.474 lat (usec): min=1788, max=6488, avg=3535.99, stdev=85.80
00:18:59.474 clat percentiles (usec):
00:18:59.474 | 1.00th=[ 3490], 5.00th=[ 3523], 10.00th=[ 3523], 20.00th=[ 3523],
00:18:59.474 | 30.00th=[ 3523], 40.00th=[ 3523], 50.00th=[ 3523], 60.00th=[ 3523],
00:18:59.474 | 70.00th=[ 3556], 80.00th=[ 3556], 90.00th=[ 3556], 95.00th=[ 3556],
00:18:59.474 | 99.00th=[ 3589], 99.50th=[ 3621], 99.90th=[ 5080], 99.95th=[ 5997],
00:18:59.474 | 99.99th=[ 6456]
00:18:59.474 bw ( KiB/s): min=70568, max=72632, per=100.00%, avg=71992.00, stdev=957.93, samples=4
00:18:59.474 iops : min=17642, max=18158, avg=17998.00, stdev=239.48, samples=4
00:18:59.474 lat (msec) : 2=0.01%, 4=99.85%, 10=0.14%
00:18:59.474 cpu : usr=99.40%, sys=0.20%, ctx=17, majf=0, minf=3
00:18:59.474 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:18:59.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:59.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:59.474 issued rwts: total=36028,36066,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:59.474 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:59.474
00:18:59.474 Run status group 0 (all jobs):
00:18:59.474 READ: bw=70.2MiB/s (73.6MB/s), 70.2MiB/s-70.2MiB/s (73.6MB/s-73.6MB/s), io=141MiB (148MB), run=2004-2004msec
00:18:59.474 WRITE: bw=70.3MiB/s (73.7MB/s), 70.3MiB/s-70.3MiB/s (73.7MB/s-73.7MB/s), io=141MiB (148MB), run=2004-2004msec
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib=
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]]
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]]
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme'
00:18:59.474 10:47:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'
00:18:59.735 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:18:59.735 fio-3.35
00:18:59.735 Starting 1 thread
00:19:02.261
00:19:02.262 test: (groupid=0, jobs=1): err= 0: pid=3832397: Thu Nov 7 10:47:29 2024
00:19:02.262 read: IOPS=14.7k, BW=229MiB/s (240MB/s)(448MiB/1955msec)
00:19:02.262 slat (nsec): min=2299, max=51465, avg=2643.24, stdev=979.10
00:19:02.262 clat (usec): min=467, max=8836, avg=1544.18, stdev=1176.76
00:19:02.262 lat (usec): min=470, max=8856, avg=1546.82, stdev=1177.08
00:19:02.262 clat percentiles (usec):
00:19:02.262 | 1.00th=[ 685], 5.00th=[ 783], 10.00th=[ 840], 20.00th=[ 914],
00:19:02.262 | 30.00th=[ 988], 40.00th=[ 1074], 50.00th=[ 1172], 60.00th=[ 1287],
00:19:02.262 | 70.00th=[ 1418], 80.00th=[ 1582], 90.00th=[ 2802], 95.00th=[ 4817],
00:19:02.262 | 99.00th=[ 6194], 99.50th=[ 6718], 99.90th=[ 7308], 99.95th=[ 7570],
00:19:02.262 | 99.99th=[ 8836]
00:19:02.262 bw ( KiB/s): min=112160, max=115360, per=48.56%, avg=113832.00, stdev=1666.95, samples=4
00:19:02.262 iops : min= 7010, max= 7210, avg=7114.50, stdev=104.18, samples=4
00:19:02.262 write: IOPS=8169, BW=128MiB/s (134MB/s)(231MiB/1807msec); 0 zone resets
00:19:02.262 slat (usec): min=26, max=144, avg=28.96, stdev= 5.71
00:19:02.262 clat (usec): min=4553, max=18843, avg=12612.69, stdev=1716.48
00:19:02.262 lat (usec): min=4579, max=18870, avg=12641.65, stdev=1716.08
00:19:02.262 clat percentiles (usec):
00:19:02.262 | 1.00th=[ 7767], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11338],
00:19:02.262 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12649], 60.00th=[12911],
00:19:02.262 | 70.00th=[13435], 80.00th=[13960], 90.00th=[14746], 95.00th=[15533],
00:19:02.262 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17957], 99.95th=[18220],
00:19:02.262 | 99.99th=[18744]
00:19:02.262 bw ( KiB/s): min=114304, max=119776, per=90.35%, avg=118104.00, stdev=2558.05, samples=4
00:19:02.262 iops : min= 7144, max= 7486, avg=7381.50, stdev=159.88, samples=4
00:19:02.262 lat (usec) : 500=0.01%, 750=2.22%, 1000=18.83%
00:19:02.262 lat (msec) : 2=37.43%, 4=2.30%, 10=7.16%, 20=32.05%
00:19:02.262 cpu : usr=96.02%, sys=2.19%, ctx=191, majf=0, minf=3
00:19:02.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:19:02.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:02.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:02.262 issued rwts: total=28645,14763,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:02.262 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:02.262
00:19:02.262 Run status group 0 (all jobs):
00:19:02.262 READ: bw=229MiB/s (240MB/s), 229MiB/s-229MiB/s (240MB/s-240MB/s), io=448MiB (469MB), run=1955-1955msec
00:19:02.262 WRITE: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=231MiB (242MB), run=1807-1807msec
00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
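Both fio passes above drive the target through SPDK's fio plugin rather than the kernel NVMe-oF initiator: the engine is injected with LD_PRELOAD, and the entire fabric address is packed into fio's --filename argument, which is why the job banners report ioengine=spdk. Stripped of the surrounding trace, the first invocation reduces to the sketch below, using the paths as they appear in this workspace:

    # Job parameters come from the .fio config; the "filename" is an SPDK transport ID.
    export LD_PRELOAD=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
    /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096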
00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:02.262 rmmod nvme_rdma 00:19:02.262 rmmod nvme_fabrics 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3831277 ']' 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3831277 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 3831277 ']' 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 3831277 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3831277 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3831277' 00:19:02.262 killing process with pid 3831277 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 3831277 00:19:02.262 10:47:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 3831277 00:19:02.519 10:47:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:02.519 10:47:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:02.519 00:19:02.519 real 0m14.902s 00:19:02.519 user 0m56.565s 00:19:02.519 sys 0m6.030s 00:19:02.519 10:47:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:02.519 10:47:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.519 ************************************ 00:19:02.519 END TEST nvmf_fio_host 00:19:02.519 ************************************ 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.777 ************************************ 00:19:02.777 START TEST nvmf_failover 00:19:02.777 ************************************ 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:19:02.777 * Looking for test storage... 00:19:02.777 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:02.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.777 --rc genhtml_branch_coverage=1 00:19:02.777 --rc genhtml_function_coverage=1 00:19:02.777 --rc genhtml_legend=1 00:19:02.777 --rc geninfo_all_blocks=1 00:19:02.777 --rc geninfo_unexecuted_blocks=1 00:19:02.777 00:19:02.777 ' 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:02.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.777 --rc genhtml_branch_coverage=1 00:19:02.777 --rc genhtml_function_coverage=1 00:19:02.777 --rc genhtml_legend=1 00:19:02.777 --rc geninfo_all_blocks=1 00:19:02.777 --rc geninfo_unexecuted_blocks=1 00:19:02.777 00:19:02.777 ' 00:19:02.777 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:02.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.778 --rc genhtml_branch_coverage=1 00:19:02.778 --rc genhtml_function_coverage=1 00:19:02.778 --rc genhtml_legend=1 00:19:02.778 --rc geninfo_all_blocks=1 00:19:02.778 --rc geninfo_unexecuted_blocks=1 00:19:02.778 00:19:02.778 ' 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:02.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.778 --rc genhtml_branch_coverage=1 00:19:02.778 --rc genhtml_function_coverage=1 00:19:02.778 --rc genhtml_legend=1 00:19:02.778 --rc geninfo_all_blocks=1 00:19:02.778 --rc geninfo_unexecuted_blocks=1 00:19:02.778 00:19:02.778 ' 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.778 10:47:30 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.778 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:03.036 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:19:03.036 10:47:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:09.598 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:09.598 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:09.598 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:09.599 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:09.599 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:09.599 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:09.599 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:09.599 altname enp217s0f0np0 00:19:09.599 altname ens818f0np0 00:19:09.599 inet 192.168.100.8/24 scope global mlx_0_0 00:19:09.599 
valid_lft forever preferred_lft forever 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:09.599 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:09.599 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:09.599 altname enp217s0f1np1 00:19:09.599 altname ens818f1np1 00:19:09.599 inet 192.168.100.9/24 scope global mlx_0_1 00:19:09.599 valid_lft forever preferred_lft forever 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:09.599 10:47:37 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:09.599 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:09.858 192.168.100.9' 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:09.858 192.168.100.9' 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:09.858 192.168.100.9' 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:09.858 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:09.859 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:09.859 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:09.859 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3836156 00:19:09.859 
10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:09.859 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3836156 00:19:09.859 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3836156 ']' 00:19:09.859 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.859 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:09.859 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.859 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:09.859 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:09.859 [2024-11-07 10:47:37.382802] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:19:09.859 [2024-11-07 10:47:37.382850] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.859 [2024-11-07 10:47:37.458574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:09.859 [2024-11-07 10:47:37.499233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.859 [2024-11-07 10:47:37.499272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.859 [2024-11-07 10:47:37.499281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.859 [2024-11-07 10:47:37.499290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.859 [2024-11-07 10:47:37.499297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
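nvmfappstart above launches nvmf_tgt with -m 0xE, which pins reactors to cores 1-3 and matches the three 'Reactor started' notices, then blocks in waitforlisten until the RPC socket answers. A rough sketch of that start-and-wait pattern, assuming the default socket /var/tmp/spdk.sock; the retry budget below is an assumption, not the script's exact value:

    # Start the target in the background, then poll its RPC socket
    # until it accepts requests (rpc_get_methods is a cheap probe).
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for _ in $(seq 1 100); do          # assumed retry budget
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done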
00:19:09.859 [2024-11-07 10:47:37.500893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.859 [2024-11-07 10:47:37.500975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:09.859 [2024-11-07 10:47:37.500977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.117 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:10.117 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:19:10.117 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:10.117 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:10.117 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:10.117 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.117 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:10.375 [2024-11-07 10:47:37.835110] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x232a570/0x232ea60) succeed. 00:19:10.375 [2024-11-07 10:47:37.844272] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x232bb60/0x2370100) succeed. 00:19:10.375 10:47:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:10.632 Malloc0 00:19:10.632 10:47:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:10.890 10:47:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:10.890 10:47:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:11.148 [2024-11-07 10:47:38.723338] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:11.148 10:47:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:11.406 [2024-11-07 10:47:38.927738] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:11.406 10:47:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:19:11.666 [2024-11-07 10:47:39.132455] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:19:11.666 10:47:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3836542 00:19:11.666 10:47:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
verify -t 15 -f 00:19:11.666 10:47:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:11.666 10:47:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3836542 /var/tmp/bdevperf.sock 00:19:11.666 10:47:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3836542 ']' 00:19:11.666 10:47:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:11.666 10:47:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:11.666 10:47:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:11.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:11.666 10:47:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:11.666 10:47:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:11.924 10:47:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:11.924 10:47:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:19:11.924 10:47:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:12.180 NVMe0n1 00:19:12.180 10:47:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:12.437 00:19:12.437 10:47:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:12.437 10:47:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3836557 00:19:12.437 10:47:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:19:13.370 10:47:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:13.628 10:47:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:19:16.908 10:47:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:16.908 00:19:16.908 10:47:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:17.165 10:47:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:19:20.446 10:47:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:20.446 [2024-11-07 10:47:47.756953] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:20.446 10:47:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:19:21.380 10:47:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:19:21.380 10:47:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3836557 00:19:27.946 { 00:19:27.946 "results": [ 00:19:27.946 { 00:19:27.946 "job": "NVMe0n1", 00:19:27.946 "core_mask": "0x1", 00:19:27.946 "workload": "verify", 00:19:27.946 "status": "finished", 00:19:27.946 "verify_range": { 00:19:27.946 "start": 0, 00:19:27.946 "length": 16384 00:19:27.946 }, 00:19:27.946 "queue_depth": 128, 00:19:27.946 "io_size": 4096, 00:19:27.946 "runtime": 15.005188, 00:19:27.946 "iops": 14425.01086957391, 00:19:27.946 "mibps": 56.34769870927309, 00:19:27.946 "io_failed": 4821, 00:19:27.946 "io_timeout": 0, 00:19:27.946 "avg_latency_us": 8657.840693231377, 00:19:27.946 "min_latency_us": 439.0912, 00:19:27.946 "max_latency_us": 1046898.2784 00:19:27.946 } 00:19:27.946 ], 00:19:27.946 "core_count": 1 00:19:27.946 } 00:19:27.946 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3836542 00:19:27.946 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3836542 ']' 00:19:27.946 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3836542 00:19:27.946 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:19:27.946 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:27.946 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3836542 00:19:27.946 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:27.946 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:27.946 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3836542' 00:19:27.946 killing process with pid 3836542 00:19:27.946 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3836542 00:19:27.946 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3836542 00:19:27.946 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:27.946 [2024-11-07 10:47:39.211556] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
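The perform_tests summary above is internally consistent: 14425.01 IOPS at the 4096-byte io_size works out to 14425.01 × 4096 / 2^20 ≈ 56.35 MiB/s, matching the reported mibps, and the 4821 entries in io_failed are presumably the requests caught in flight while listeners were torn down. The same arithmetic as a one-liner, with the values copied from the JSON:

    awk 'BEGIN { printf "%.2f MiB/s\n", 14425.01086957391 * 4096 / (1024 * 1024) }'   # prints 56.35 MiB/s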
00:19:27.946 [2024-11-07 10:47:39.211613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3836542 ] 00:19:27.946 [2024-11-07 10:47:39.285816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.946 [2024-11-07 10:47:39.325670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.946 Running I/O for 15 seconds... 00:19:27.946 18176.00 IOPS, 71.00 MiB/s [2024-11-07T09:47:55.617Z] 9793.00 IOPS, 38.25 MiB/s [2024-11-07T09:47:55.617Z] [2024-11-07 10:47:42.104039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fa000 len:0x1000 key:0x182500 00:19:27.946 [2024-11-07 10:47:42.104078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.946 [2024-11-07 10:47:42.104096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f8000 len:0x1000 key:0x182500 00:19:27.946 [2024-11-07 10:47:42.104107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.946 [2024-11-07 10:47:42.104118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x182500 00:19:27.946 [2024-11-07 10:47:42.104128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.946 [2024-11-07 10:47:42.104139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f4000 len:0x1000 key:0x182500 00:19:27.946 [2024-11-07 10:47:42.104148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.946 [2024-11-07 10:47:42.104158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f2000 len:0x1000 key:0x182500 00:19:27.946 [2024-11-07 10:47:42.104167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.946 [2024-11-07 10:47:42.104178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f0000 len:0x1000 key:0x182500 00:19:27.946 [2024-11-07 10:47:42.104187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.946 [2024-11-07 10:47:42.104198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ee000 len:0x1000 key:0x182500 00:19:27.946 [2024-11-07 10:47:42.104207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.946 [2024-11-07 10:47:42.104217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ec000 
len:0x1000 key:0x182500 00:19:27.946 [2024-11-07 10:47:42.104226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.946 [... the same READ / ABORTED - SQ DELETION record pair repeats for each subsequent in-flight request (lba 25680 through 25888, one 0x1000-byte SGL keyed block each), at which point the captured trace cuts off ...]
10:47:42.104791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.104800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.104811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.104820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.104831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.104840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.104851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.104859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.104870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.104879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.104890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.104899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.104909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a8000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.104918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.104928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.104937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.104948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.104956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.104969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:25968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.104978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.104988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.104998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.105008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.105017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.105027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.105036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.105047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.105055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.105068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.105077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.105088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.105097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.105107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.105116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.105126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.105136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.105146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:26040 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200004390000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.105155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.105166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.105175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.105186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:26056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.105195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.105206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.105215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.105226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.105234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.105245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.105254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.105264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:26088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.105273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.947 [2024-11-07 10:47:42.105284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:26096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x182500 00:19:27.947 [2024-11-07 10:47:42.105293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 
10:47:42.105331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:26120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:26176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:26232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:26256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 
00:19:27.948 [2024-11-07 10:47:42.105710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004348000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:26336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004346000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:26344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004344000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004342000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:26360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004340000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433e000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.105986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433c000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.105995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.106006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:26384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433a000 len:0x1000 key:0x182500 00:19:27.948 [2024-11-07 10:47:42.106014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.948 [2024-11-07 10:47:42.106027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:26392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004338000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.106037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004336000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.106056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26408 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.106076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004332000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.106097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.106117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432e000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.106136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:26440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.106156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.106175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.106194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.106214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004324000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.106233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004322000 len:0x1000 key:0x182500 
00:19:27.949 [2024-11-07 10:47:42.106253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:26488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004320000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.106273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:26496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.106292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.106312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.106333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:26520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004318000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.106354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.106374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004314000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.106394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.106404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004312000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.115684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.115698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004310000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.115708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.115718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.115728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.115739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430c000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.115748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.115759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.115769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.115779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.115788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.115799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.115808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.115819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.115829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.115839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.115848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.115860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x182500 00:19:27.949 [2024-11-07 10:47:42.115869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.949 [2024-11-07 10:47:42.115880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.949 [2024-11-07 10:47:42.115889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 
00:19:27.949 [2024-11-07 10:47:42.117797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:27.949 [2024-11-07 10:47:42.117817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:27.949 [2024-11-07 10:47:42.117829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26632 len:8 PRP1 0x0 PRP2 0x0
00:19:27.949 [2024-11-07 10:47:42.117842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:27.949 [2024-11-07 10:47:42.117899] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:19:27.949 [2024-11-07 10:47:42.117915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:19:27.949 [2024-11-07 10:47:42.117960 - 10:47:42.118052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1-4 nsid:0 cdw10:00000000 cdw11:00000000, each completed nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32765 cdw0:1783cd0 sqhd:dc10 p:0 m:0 dnr:0 [4 identical pairs condensed]
00:19:27.949 [2024-11-07 10:47:42.135410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:19:27.949 [2024-11-07 10:47:42.135428] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:19:27.949 [2024-11-07 10:47:42.135441] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:19:27.950 [2024-11-07 10:47:42.138315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:19:27.950 [2024-11-07 10:47:42.176953] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
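The sequence above is the behavior under test: in-flight I/O on the primary path (192.168.100.8:4420) is aborted when the submission queue is deleted, bdev_nvme fails the controller over to the alternate path (192.168.100.8:4421), and a controller reset completes the switch. A minimal sketch of setting up such a two-path failover target attachment with SPDK's rpc.py is below; the bdev name (Nvme0) is hypothetical and the "-x failover" multipath mode is an assumption about the rpc.py options in use here, while the NQN, transport, addresses, and ports are taken from the log:
# attach the primary path (bdev name Nvme0 is illustrative only)
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -x failover
# register the alternate path under the same controller name; I/O fails over to it when 4420 disappears
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover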
00:19:27.950 11600.67 IOPS, 45.32 MiB/s [2024-11-07T09:47:55.621Z] 13262.25 IOPS, 51.81 MiB/s [2024-11-07T09:47:55.621Z] 12515.80 IOPS, 48.89 MiB/s [2024-11-07T09:47:55.621Z]
00:19:27.950 [2024-11-07 10:47:45.564151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:119632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004322000 len:0x1000 key:0x183a00
00:19:27.950 [2024-11-07 10:47:45.564191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0
00:19:27.950 [2024-11-07 10:47:45.564208 - 10:47:45.565134] [... further repetitive command/completion pairs condensed: READ sqid:1 nsid:1 lba:119640-119808 len:8 SGL KEYED DATA BLOCK (key:0x183a00) interleaved with WRITE sqid:1 nsid:1 lba:120264-120448 len:8 SGL DATA BLOCK OFFSET 0x0 (cid varies), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 ...]
00:19:27.951 [2024-11-07 10:47:45.565148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104
nsid:1 lba:119816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:119832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:119840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:119848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:119856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:119888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 
len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:119896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:119904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:119928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:119960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565523] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:119968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x183a00 00:19:27.951 [2024-11-07 10:47:45.565543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.951 [2024-11-07 10:47:45.565554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:119976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x183a00 00:19:27.952 [2024-11-07 10:47:45.565563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x183a00 00:19:27.952 [2024-11-07 10:47:45.565583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x183a00 00:19:27.952 [2024-11-07 10:47:45.565605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x183a00 00:19:27.952 [2024-11-07 10:47:45.565624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.565644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.565667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.565686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.565707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.565726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.565745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.565764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.565784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.565802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.565823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.565842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.565863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.565883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.565902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 
10:47:45.565913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.565921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.565942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 len:0x1000 key:0x183a00 00:19:27.952 [2024-11-07 10:47:45.565962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x183a00 00:19:27.952 [2024-11-07 10:47:45.565981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.565993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x183a00 00:19:27.952 [2024-11-07 10:47:45.566003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.566014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004318000 len:0x1000 key:0x183a00 00:19:27.952 [2024-11-07 10:47:45.566022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.566033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x183a00 00:19:27.952 [2024-11-07 10:47:45.566042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.566052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004314000 len:0x1000 key:0x183a00 00:19:27.952 [2024-11-07 10:47:45.566062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.566073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004312000 len:0x1000 key:0x183a00 00:19:27.952 [2024-11-07 10:47:45.566084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.566094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120064 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x200004310000 len:0x1000 key:0x183a00 00:19:27.952 [2024-11-07 10:47:45.566103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.566113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.566123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.566133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.566142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.566153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.566161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.566172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.566181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.566191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.566200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.566210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.566219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.566229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.566238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.566249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.952 [2024-11-07 10:47:45.566258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.566268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004340000 len:0x1000 key:0x183a00 00:19:27.952 [2024-11-07 10:47:45.566277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.566287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004342000 len:0x1000 key:0x183a00 00:19:27.952 [2024-11-07 10:47:45.566296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.952 [2024-11-07 10:47:45.566307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004344000 len:0x1000 key:0x183a00 00:19:27.952 [2024-11-07 10:47:45.566318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004346000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004348000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:120136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004332000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:120152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004336000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004338000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433a000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433c000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433e000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:120208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 
len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.566736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x183a00 00:19:27.953 [2024-11-07 10:47:45.566745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.568618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.953 [2024-11-07 10:47:45.568631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.953 [2024-11-07 10:47:45.568640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120648 len:8 PRP1 0x0 PRP2 0x0 00:19:27.953 [2024-11-07 10:47:45.568653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.953 [2024-11-07 10:47:45.568700] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:19:27.953 [2024-11-07 10:47:45.568712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:27.953 [2024-11-07 10:47:45.571448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:27.953 [2024-11-07 10:47:45.585744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0 00:19:27.953 [2024-11-07 10:47:45.628875] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
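For readers decoding the entries above: the "(00/08)" printed with each ABORTED completion is the NVMe Status Field split as SCT/SC, i.e. Status Code Type 0x0 (generic command status) and Status Code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion"; dnr:0 means the Do Not Retry bit is clear, so the initiator may requeue the I/O on the new path once failover completes. The len:8 in each command line is 8 x 512-byte sectors, i.e. 4 KiB per I/O, which is consistent with the throughput figures that follow (11637.67 IOPS x 4 KiB is roughly 45.46 MiB/s). The following is a minimal standalone C sketch, not SPDK's own helper, showing how such a status word unpacks from completion-queue-entry dword 3 under the spec's documented bit layout:

/* Standalone sketch (assumption: illustrative only, not SPDK code).
 * NVMe CQE DW3: bits 15:0 CID, bit 16 Phase, bits 24:17 SC,
 * bits 27:25 SCT, bit 31 DNR. SCT 0x0 / SC 0x08 is the generic
 * "Command Aborted due to SQ Deletion" seen throughout this log. */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {
    uint8_t sct; /* Status Code Type */
    uint8_t sc;  /* Status Code */
    uint8_t dnr; /* Do Not Retry */
};

static struct nvme_status decode_status(uint32_t cqe_dw3)
{
    struct nvme_status s;
    s.sc  = (cqe_dw3 >> 17) & 0xff;
    s.sct = (cqe_dw3 >> 25) & 0x7;
    s.dnr = (cqe_dw3 >> 31) & 0x1;
    return s;
}

int main(void)
{
    /* Build SCT=0x0, SC=0x08, DNR=0 -> the "(00/08)" in the log. */
    uint32_t dw3 = (0x0u << 25) | (0x08u << 17);
    struct nvme_status s = decode_status(dw3);
    printf("sct=%02x sc=%02x dnr=%u -> %s\n", s.sct, s.sc, s.dnr,
           (s.sct == 0x0 && s.sc == 0x08) ? "ABORTED - SQ DELETION"
                                          : "other status");
    return 0;
}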
00:19:27.953 11637.67 IOPS, 45.46 MiB/s [2024-11-07T09:47:55.624Z] 12609.57 IOPS, 49.26 MiB/s [2024-11-07T09:47:55.624Z] 13339.88 IOPS, 52.11 MiB/s [2024-11-07T09:47:55.624Z] 13761.22 IOPS, 53.75 MiB/s [2024-11-07T09:47:55.624Z] [2024-11-07 10:47:49.969490] nvme_qpair.c: [another long run of paired *NOTICE* entries elided: a second abort burst on qid:1 during the next path failover, with READs (lba 99080-99392, SGL KEYED DATA BLOCK, key:0x182500) and WRITEs (lba 99656-100016, SGL DATA BLOCK) again completed as ABORTED - SQ DELETION (00/08); the run continues below]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.956 [2024-11-07 10:47:49.971281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.956 [2024-11-07 10:47:49.971300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.956 [2024-11-07 10:47:49.971320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.956 [2024-11-07 10:47:49.971339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.956 [2024-11-07 10:47:49.971359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.956 [2024-11-07 10:47:49.971378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.956 [2024-11-07 10:47:49.971399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.956 [2024-11-07 10:47:49.971418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.956 [2024-11-07 10:47:49.971437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.956 [2024-11-07 
10:47:49.971456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004346000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004344000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004342000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004340000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 
00:19:27.956 [2024-11-07 10:47:49.971837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.956 [2024-11-07 10:47:49.971917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x182500 00:19:27.956 [2024-11-07 10:47:49.971927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.957 [2024-11-07 10:47:49.971937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x182500 00:19:27.957 [2024-11-07 10:47:49.971946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.957 [2024-11-07 10:47:49.971957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x182500 00:19:27.957 [2024-11-07 10:47:49.971966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.957 [2024-11-07 10:47:49.971976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430c000 len:0x1000 key:0x182500 00:19:27.957 [2024-11-07 10:47:49.971986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.957 [2024-11-07 10:47:49.971996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x182500 00:19:27.957 [2024-11-07 10:47:49.972006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.957 [2024-11-07 10:47:49.972016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x182500 00:19:27.957 [2024-11-07 10:47:49.972025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.957 [2024-11-07 10:47:49.972036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x182500 00:19:27.957 [2024-11-07 10:47:49.972045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.957 [2024-11-07 10:47:49.972056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x182500 00:19:27.957 [2024-11-07 10:47:49.972065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.957 [2024-11-07 10:47:49.972075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x182500 00:19:27.957 [2024-11-07 10:47:49.972084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:d600000 sqhd:7250 p:0 m:0 dnr:0 00:19:27.957 [2024-11-07 10:47:49.973824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.957 [2024-11-07 10:47:49.973838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.957 [2024-11-07 10:47:49.973846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99648 len:8 PRP1 0x0 PRP2 0x0 00:19:27.957 [2024-11-07 10:47:49.973856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.957 [2024-11-07 10:47:49.973901] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:19:27.957 [2024-11-07 10:47:49.973913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:27.957 [2024-11-07 10:47:49.976676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:27.957 [2024-11-07 10:47:49.990632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0 00:19:27.957 [2024-11-07 10:47:50.031145] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
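The abort storm above is the expected signature of a forced failover: deleting the submission queue completes every queued WRITE and READ with ABORTED - SQ DELETION, after which bdev_nvme fails the trid over from 192.168.100.8:4422 to 192.168.100.8:4420 and resets the controller. The script validates the run by counting exactly these reset notices, as the trace below shows. A minimal sketch of that check in bash, assuming the bdevperf output was captured to try.txt as in this run ($testdir is hypothetical shorthand for the host test directory):

    # Count successful controller resets in the captured bdevperf output.
    count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
    # Three listener flips are injected during the 15 s run, so exactly
    # three successful resets are expected; any other count fails the test.
    if (( count != 3 )); then
        echo "expected 3 controller resets, saw $count" >&2
        exit 1
    fi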
00:19:27.957 12391.40 IOPS, 48.40 MiB/s [2024-11-07T09:47:55.628Z] 12941.00 IOPS, 50.55 MiB/s [2024-11-07T09:47:55.628Z] 13403.75 IOPS, 52.36 MiB/s [2024-11-07T09:47:55.628Z] 13797.15 IOPS, 53.90 MiB/s [2024-11-07T09:47:55.628Z] 14133.50 IOPS, 55.21 MiB/s [2024-11-07T09:47:55.628Z] 14424.60 IOPS, 56.35 MiB/s 00:19:27.957 Latency(us) 00:19:27.957 [2024-11-07T09:47:55.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.957 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:27.957 Verification LBA range: start 0x0 length 0x4000 00:19:27.957 NVMe0n1 : 15.01 14425.01 56.35 321.29 0.00 8657.84 439.09 1046898.28 00:19:27.957 [2024-11-07T09:47:55.628Z] =================================================================================================================== 00:19:27.957 [2024-11-07T09:47:55.628Z] Total : 14425.01 56.35 321.29 0.00 8657.84 439.09 1046898.28 00:19:27.957 Received shutdown signal, test time was about 15.000000 seconds 00:19:27.957 00:19:27.957 Latency(us) 00:19:27.957 [2024-11-07T09:47:55.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.957 [2024-11-07T09:47:55.628Z] =================================================================================================================== 00:19:27.957 [2024-11-07T09:47:55.628Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:27.957 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:19:27.957 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:19:27.957 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:19:27.957 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3839143 00:19:27.957 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:19:27.957 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3839143 /var/tmp/bdevperf.sock 00:19:27.957 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3839143 ']' 00:19:27.957 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.957 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:27.957 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:27.957 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:27.957 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:27.957 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:27.957 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:19:27.957 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:28.215 [2024-11-07 10:47:55.752720] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:28.215 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:19:28.473 [2024-11-07 10:47:55.937353] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:19:28.473 10:47:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:28.730 NVMe0n1 00:19:28.730 10:47:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:28.997 00:19:28.997 10:47:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:29.288 00:19:29.288 10:47:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:29.288 10:47:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:19:29.552 10:47:56 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:29.552 10:47:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:19:32.832 10:48:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:32.833 10:48:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:19:32.833 10:48:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3839967 00:19:32.833 10:48:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:32.833 10:48:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3839967 00:19:34.205 { 00:19:34.205 "results": [ 00:19:34.205 { 00:19:34.205 "job": "NVMe0n1", 
00:19:34.205 "core_mask": "0x1", 00:19:34.205 "workload": "verify", 00:19:34.205 "status": "finished", 00:19:34.205 "verify_range": { 00:19:34.205 "start": 0, 00:19:34.205 "length": 16384 00:19:34.205 }, 00:19:34.205 "queue_depth": 128, 00:19:34.205 "io_size": 4096, 00:19:34.205 "runtime": 1.00798, 00:19:34.205 "iops": 18159.09045814401, 00:19:34.205 "mibps": 70.93394710212505, 00:19:34.205 "io_failed": 0, 00:19:34.205 "io_timeout": 0, 00:19:34.205 "avg_latency_us": 7011.882606993008, 00:19:34.205 "min_latency_us": 2516.5824, 00:19:34.205 "max_latency_us": 14365.4912 00:19:34.205 } 00:19:34.205 ], 00:19:34.205 "core_count": 1 00:19:34.205 } 00:19:34.205 10:48:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:34.205 [2024-11-07 10:47:55.370606] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:19:34.205 [2024-11-07 10:47:55.370659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3839143 ] 00:19:34.205 [2024-11-07 10:47:55.446862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.205 [2024-11-07 10:47:55.482697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.205 [2024-11-07 10:47:57.131108] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:19:34.205 [2024-11-07 10:47:57.131660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:19:34.205 [2024-11-07 10:47:57.131693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:19:34.205 [2024-11-07 10:47:57.153808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0 00:19:34.205 [2024-11-07 10:47:57.170016] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:19:34.205 Running I/O for 1 seconds... 
00:19:34.205 18152.00 IOPS, 70.91 MiB/s 00:19:34.205 Latency(us) 00:19:34.205 [2024-11-07T09:48:01.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.205 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:34.205 Verification LBA range: start 0x0 length 0x4000 00:19:34.205 NVMe0n1 : 1.01 18159.09 70.93 0.00 0.00 7011.88 2516.58 14365.49 00:19:34.205 [2024-11-07T09:48:01.876Z] =================================================================================================================== 00:19:34.205 [2024-11-07T09:48:01.876Z] Total : 18159.09 70.93 0.00 0.00 7011.88 2516.58 14365.49 00:19:34.205 10:48:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:34.205 10:48:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:19:34.205 10:48:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:34.463 10:48:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:34.463 10:48:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:19:34.463 10:48:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:34.719 10:48:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:19:37.998 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:37.998 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:19:37.998 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3839143 00:19:37.998 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3839143 ']' 00:19:37.998 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3839143 00:19:37.998 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:19:37.998 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:37.998 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3839143 00:19:37.998 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:37.998 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:37.998 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3839143' 00:19:37.998 killing process with pid 3839143 00:19:37.998 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3839143 00:19:37.998 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3839143 00:19:38.257 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- 
host/failover.sh@110 -- # sync 00:19:38.257 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:38.516 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:38.516 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:38.516 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:19:38.516 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:38.516 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:19:38.516 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:38.516 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:38.516 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:19:38.516 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:38.516 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:38.516 rmmod nvme_rdma 00:19:38.516 rmmod nvme_fabrics 00:19:38.516 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:38.516 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:19:38.516 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:19:38.516 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3836156 ']' 00:19:38.516 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3836156 00:19:38.516 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3836156 ']' 00:19:38.516 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3836156 00:19:38.516 10:48:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:19:38.516 10:48:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:38.516 10:48:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3836156 00:19:38.516 10:48:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:38.516 10:48:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:38.516 10:48:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3836156' 00:19:38.516 killing process with pid 3836156 00:19:38.516 10:48:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3836156 00:19:38.516 10:48:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3836156 00:19:38.775 10:48:06 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:38.776 10:48:06 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:38.776 00:19:38.776 real 0m36.083s 00:19:38.776 user 1m58.500s 00:19:38.776 sys 0m7.665s 00:19:38.776 10:48:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:38.776 10:48:06 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
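Teardown, traced above, walks back everything setup created: sync, delete the subsystem over RPC so no initiator keeps the kernel modules busy, retry the nvme-rdma/nvme-fabrics unload under set +e (the modules can stay referenced briefly while RDMA queues drain, hence the rmmod chatter), then kill the nvmf target. A condensed sketch of that nvmftestfini flow, with $rpc_py and $nvmfpid as stand-ins for the scripts' rpc.py wrapper and the target pid recorded at startup; the retry loop details are an assumption from the trace:

    sync
    $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # Unload may fail transiently, so retry instead of aborting the run.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    killprocess "$nvmfpid"   # stop the nvmf target started during setup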
00:19:38.776 ************************************ 00:19:38.776 END TEST nvmf_failover 00:19:38.776 ************************************ 00:19:38.776 10:48:06 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:19:38.776 10:48:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:38.776 10:48:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:38.776 10:48:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.776 ************************************ 00:19:38.776 START TEST nvmf_host_discovery 00:19:38.776 ************************************ 00:19:38.776 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:19:39.035 * Looking for test storage... 00:19:39.035 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:39.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.035 --rc genhtml_branch_coverage=1 00:19:39.035 --rc genhtml_function_coverage=1 00:19:39.035 --rc genhtml_legend=1 00:19:39.035 --rc geninfo_all_blocks=1 00:19:39.035 --rc geninfo_unexecuted_blocks=1 00:19:39.035 00:19:39.035 ' 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:39.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.035 --rc genhtml_branch_coverage=1 00:19:39.035 --rc genhtml_function_coverage=1 00:19:39.035 --rc genhtml_legend=1 00:19:39.035 --rc geninfo_all_blocks=1 00:19:39.035 --rc geninfo_unexecuted_blocks=1 00:19:39.035 00:19:39.035 ' 00:19:39.035 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:39.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.035 --rc genhtml_branch_coverage=1 00:19:39.035 --rc genhtml_function_coverage=1 00:19:39.035 --rc genhtml_legend=1 00:19:39.035 --rc geninfo_all_blocks=1 00:19:39.035 --rc geninfo_unexecuted_blocks=1 00:19:39.035 00:19:39.035 ' 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:39.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.036 --rc genhtml_branch_coverage=1 00:19:39.036 --rc genhtml_function_coverage=1 00:19:39.036 --rc genhtml_legend=1 00:19:39.036 --rc geninfo_all_blocks=1 00:19:39.036 --rc geninfo_unexecuted_blocks=1 00:19:39.036 00:19:39.036 ' 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:39.036 10:48:06 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:39.036 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the 
same IP for host and target.' 00:19:39.036 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:19:39.036 00:19:39.036 real 0m0.229s 00:19:39.036 user 0m0.123s 00:19:39.036 sys 0m0.123s 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.036 ************************************ 00:19:39.036 END TEST nvmf_host_discovery 00:19:39.036 ************************************ 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.036 ************************************ 00:19:39.036 START TEST nvmf_host_multipath_status 00:19:39.036 ************************************ 00:19:39.036 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:19:39.295 * Looking for test storage... 00:19:39.295 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:39.295 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:39.295 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:19:39.295 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:39.295 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:39.295 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:19:39.296 10:48:06 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:39.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.296 --rc genhtml_branch_coverage=1 00:19:39.296 --rc genhtml_function_coverage=1 00:19:39.296 --rc genhtml_legend=1 00:19:39.296 --rc geninfo_all_blocks=1 00:19:39.296 --rc geninfo_unexecuted_blocks=1 00:19:39.296 00:19:39.296 ' 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:39.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.296 --rc genhtml_branch_coverage=1 00:19:39.296 --rc genhtml_function_coverage=1 00:19:39.296 --rc genhtml_legend=1 00:19:39.296 --rc geninfo_all_blocks=1 00:19:39.296 --rc geninfo_unexecuted_blocks=1 00:19:39.296 00:19:39.296 ' 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:39.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.296 --rc genhtml_branch_coverage=1 00:19:39.296 --rc genhtml_function_coverage=1 00:19:39.296 --rc genhtml_legend=1 00:19:39.296 --rc geninfo_all_blocks=1 00:19:39.296 --rc geninfo_unexecuted_blocks=1 00:19:39.296 00:19:39.296 ' 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:39.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.296 --rc genhtml_branch_coverage=1 00:19:39.296 --rc genhtml_function_coverage=1 
00:19:39.296 --rc genhtml_legend=1 00:19:39.296 --rc geninfo_all_blocks=1 00:19:39.296 --rc geninfo_unexecuted_blocks=1 00:19:39.296 00:19:39.296 ' 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:39.296 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:39.296 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:39.297 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:19:39.297 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:39.297 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:39.297 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:39.297 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:39.297 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.297 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:39.297 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:39.297 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:39.297 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.297 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.297 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.297 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:39.297 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:39.297 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:19:39.297 10:48:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:45.859 10:48:13 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:45.859 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:45.859 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:45.859 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:45.859 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:45.860 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:45.860 
10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:45.860 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:45.860 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:45.860 altname enp217s0f0np0 00:19:45.860 altname ens818f0np0 00:19:45.860 inet 192.168.100.8/24 scope global mlx_0_0 00:19:45.860 valid_lft forever preferred_lft forever 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 
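The last few records above are the harness's get_ip_address helper at work: `ip -o -4 addr show` prints one record per interface address, awk pulls the ADDR/PREFIX field, and cut strips the prefix length. A standalone sketch of that same pipeline, assuming only that the interface name (mlx_0_0 here, per this log) exists on the machine:

    # Sketch of the get_ip_address pipeline traced above: print the
    # first IPv4 address bound to an interface, minus the /prefix.
    get_ip_address() {
        local interface=$1
        # `ip -o -4` emits one line per address; field 4 is "ADDR/PREFIX"
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this test node

On this run it resolves mlx_0_0 to 192.168.100.8 and, just below, mlx_0_1 to 192.168.100.9, which become the first and second target IPs.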
00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:45.860 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:45.860 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:45.860 altname enp217s0f1np1 00:19:45.860 altname ens818f1np1 00:19:45.860 inet 192.168.100.9/24 scope global mlx_0_1 00:19:45.860 valid_lft forever preferred_lft forever 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:19:45.860 10:48:13 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:45.860 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:46.119 192.168.100.9' 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:46.119 192.168.100.9' 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:46.119 192.168.100.9' 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:46.119 10:48:13 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3844717 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3844717 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3844717 ']' 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:46.119 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:46.119 [2024-11-07 10:48:13.632741] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:19:46.119 [2024-11-07 10:48:13.632789] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.119 [2024-11-07 10:48:13.710742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:46.119 [2024-11-07 10:48:13.750083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.119 [2024-11-07 10:48:13.750121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.119 [2024-11-07 10:48:13.750131] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.119 [2024-11-07 10:48:13.750140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.119 [2024-11-07 10:48:13.750146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
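nvmfappstart above boils down to: launch build/bin/nvmf_tgt in the background with a shared-memory id (-i 0), the full tracepoint mask (-e 0xFFFF) and a two-core mask (-m 0x3), then block until the app answers on its default RPC socket. A simplified stand-in for that waitforlisten step — the retry count and polling interval are assumptions, and rpc_get_methods is used only as a cheap readiness probe:

    # Launch the target and poll its RPC socket until it is ready.
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        # rpc_get_methods succeeds only once the app is listening
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1 && break
        sleep 0.1
    done

Once the socket answers, the harness returns 0 below and the test proceeds to transport and subsystem creation.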
00:19:46.119 [2024-11-07 10:48:13.751378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.119 [2024-11-07 10:48:13.751387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.378 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:46.378 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:19:46.378 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:46.378 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:46.378 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:46.378 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.378 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3844717 00:19:46.378 10:48:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:46.636 [2024-11-07 10:48:14.090025] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c08730/0x1c0cc20) succeed. 00:19:46.636 [2024-11-07 10:48:14.099088] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c09c80/0x1c4e2c0) succeed. 00:19:46.636 10:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:46.895 Malloc0 00:19:46.895 10:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:47.153 10:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:47.153 10:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:47.411 [2024-11-07 10:48:14.914271] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:47.411 10:48:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:47.669 [2024-11-07 10:48:15.094521] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:47.669 10:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:47.669 10:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3845000 00:19:47.669 10:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:47.669 10:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3845000 /var/tmp/bdevperf.sock 00:19:47.669 10:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3845000 ']' 00:19:47.670 10:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.670 10:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:47.670 10:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.670 10:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:47.670 10:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:47.927 10:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:47.927 10:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:19:47.927 10:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:47.927 10:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:48.186 Nvme0n1 00:19:48.186 10:48:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:48.444 Nvme0n1 00:19:48.444 10:48:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:48.444 10:48:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:50.973 10:48:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:50.973 10:48:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:19:50.973 10:48:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:19:50.973 10:48:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:51.907 10:48:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:51.907 10:48:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:51.907 10:48:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.907 10:48:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:52.165 10:48:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.165 10:48:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:52.165 10:48:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.165 10:48:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:52.423 10:48:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:52.424 10:48:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:52.424 10:48:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:52.424 10:48:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.424 10:48:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.424 10:48:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:52.424 10:48:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.424 10:48:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:52.682 10:48:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.682 10:48:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:52.682 10:48:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:52.682 10:48:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.940 10:48:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.940 10:48:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
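Every check_status round above expands into six of these probes: one rpc.py call to bdevperf's socket per port and field, filtered with jq. The same probe as a standalone helper (the socket path and jq filter shape are exactly as traced; the function wrapper itself is just for illustration):

    # Ask bdevperf which I/O paths it sees and extract one boolean
    # field (current/connected/accessible) for the path on a port.
    port_status() {
        local port=$1 field=$2
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[]
                   | select(.transport.trsvcid==\"$port\").$field"
    }
    port_status 4420 current    # true while 4420 is the active path

With both listeners optimized, 4420 is current and both ports report connected and accessible, which is what the round above verifies.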
00:19:52.940 10:48:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.940 10:48:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:53.198 10:48:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:53.199 10:48:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:53.199 10:48:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:19:53.199 10:48:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:19:53.456 10:48:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:54.389 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:54.389 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:54.389 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.389 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:54.647 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:54.647 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:54.647 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:54.647 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.905 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:54.905 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:54.905 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.905 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:55.163 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.164 10:48:22 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:55.164 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.164 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:55.164 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.164 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:55.164 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.164 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:55.422 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.422 10:48:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:55.422 10:48:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.422 10:48:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:55.680 10:48:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.680 10:48:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:55.680 10:48:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:19:55.938 10:48:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:19:55.938 10:48:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:57.334 10:48:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:57.334 10:48:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:57.334 10:48:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:57.334 10:48:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.334 10:48:24 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:57.335 10:48:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:57.335 10:48:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:57.335 10:48:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.335 10:48:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:57.335 10:48:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:57.335 10:48:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.335 10:48:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:57.593 10:48:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:57.593 10:48:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:57.593 10:48:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.593 10:48:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:57.851 10:48:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:57.851 10:48:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:57.851 10:48:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.851 10:48:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:58.110 10:48:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.110 10:48:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:58.110 10:48:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:58.110 10:48:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.110 10:48:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.110 10:48:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:19:58.110 10:48:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:19:58.368 10:48:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:19:58.626 10:48:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:59.560 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:59.560 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:59.560 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.560 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:59.818 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:59.818 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:59.818 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.818 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:00.076 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:00.076 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:00.076 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.076 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:00.076 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:00.076 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:00.076 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.076 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:00.335 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:00.335 10:48:27 
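The set_ANA_state calls always fan out to exactly two RPCs, one per listener (4420 then 4421), and they go to the target's default RPC socket rather than the bdevperf socket, since ANA state lives on the subsystem listener. A sketch matching the traced commands:

    set_ANA_state() {
        # $1 is applied to the 4420 listener, $2 to the 4421 listener.
        local rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4421 -n "$2"
    }

The sleep 1 that follows each pair presumably gives the host time to pick up the ANA change before check_status asserts the new path states.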
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:00.335 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.335 10:48:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:00.593 10:48:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:00.593 10:48:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:00.593 10:48:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:00.593 10:48:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.877 10:48:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:00.877 10:48:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:20:00.877 10:48:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:20:00.877 10:48:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:20:01.163 10:48:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:20:02.107 10:48:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:20:02.107 10:48:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:02.107 10:48:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.107 10:48:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:02.365 10:48:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:02.365 10:48:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:02.365 10:48:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.365 10:48:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:02.624 10:48:30 
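check_status takes six booleans whose order can be read straight off the port_status calls it fans out to: current for 4420/4421, then connected, then accessible. Reconstructed from the trace (argument order inferred; details of the real helper may vary):

    check_status() {
        port_status 4420 current    "$1"
        port_status 4421 current    "$2"
        port_status 4420 connected  "$3"
        port_status 4421 connected  "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

Since the autotest scripts run with errexit enabled, a single mismatching attribute is enough to fail the whole test.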
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:02.624 10:48:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:02.624 10:48:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.624 10:48:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:02.624 10:48:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:02.624 10:48:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:02.624 10:48:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:02.624 10:48:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.883 10:48:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:02.883 10:48:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:02.883 10:48:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.883 10:48:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:03.141 10:48:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:03.141 10:48:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:03.141 10:48:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.141 10:48:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:03.400 10:48:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:03.400 10:48:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:20:03.400 10:48:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:20:03.400 10:48:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:20:03.658 10:48:31 
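The mapping being exercised step by step here: a listener in the optimized or non_optimized ANA state shows up host-side as accessible=true, inaccessible as accessible=false, and under the initial active_passive policy exactly one accessible path carries current=true (later, at multipath_status.sh@116, the policy flips to active_active, after which both optimized paths report current=true). Instead of one jq query per attribute, the whole path table can be eyeballed in one shot:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r '.poll_groups[].io_paths[] |
               "\(.transport.trsvcid): current=\(.current) connected=\(.connected) accessible=\(.accessible)"'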
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:20:04.591 10:48:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:20:04.591 10:48:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:04.591 10:48:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.591 10:48:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:04.849 10:48:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:04.849 10:48:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:04.849 10:48:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.849 10:48:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:05.108 10:48:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:05.108 10:48:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:05.108 10:48:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.108 10:48:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:05.367 10:48:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:05.367 10:48:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:05.367 10:48:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.367 10:48:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:05.367 10:48:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:05.367 10:48:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:05.367 10:48:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.367 10:48:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:05.626 10:48:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:20:05.626 10:48:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:05.626 10:48:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.626 10:48:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:05.884 10:48:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:05.884 10:48:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:20:06.142 10:48:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:20:06.142 10:48:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:20:06.142 10:48:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:20:06.400 10:48:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:20:07.335 10:48:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:20:07.335 10:48:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:07.335 10:48:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.335 10:48:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:07.593 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:07.593 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:07.593 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:07.593 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.851 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:07.852 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:07.852 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.852 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:08.110 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:08.110 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:08.110 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:08.110 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:08.110 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:08.110 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:08.110 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:08.110 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:08.388 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:08.388 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:08.388 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:08.388 10:48:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:08.646 10:48:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:08.646 10:48:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:20:08.646 10:48:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:20:08.646 10:48:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:20:08.905 10:48:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:20:10.280 10:48:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:20:10.280 10:48:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:10.280 10:48:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.280 10:48:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:10.280 10:48:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:10.280 10:48:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:10.280 10:48:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.280 10:48:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:10.280 10:48:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:10.280 10:48:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:10.280 10:48:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.280 10:48:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:10.539 10:48:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:10.539 10:48:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:10.539 10:48:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.539 10:48:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:10.797 10:48:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:10.797 10:48:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:10.797 10:48:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.797 10:48:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:10.797 10:48:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:11.056 10:48:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:11.056 10:48:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:11.056 10:48:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.056 10:48:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:11.056 10:48:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:20:11.056 10:48:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:20:11.314 10:48:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:20:11.572 10:48:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:20:12.507 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:20:12.507 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:12.507 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.507 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:12.765 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:12.765 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:12.765 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.765 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:13.023 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.023 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:13.023 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.023 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:13.023 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.023 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:13.023 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:20:13.023 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:13.282 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.282 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:13.282 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.282 10:48:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:13.540 10:48:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.540 10:48:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:13.540 10:48:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.540 10:48:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:13.798 10:48:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.798 10:48:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:20:13.798 10:48:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:20:13.798 10:48:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:20:14.057 10:48:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:14.992 10:48:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:14.992 10:48:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:14.992 10:48:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:14.992 10:48:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:15.250 10:48:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:15.250 10:48:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:15.250 10:48:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.251 10:48:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:15.509 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:15.509 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:15.509 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.509 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:15.768 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:15.768 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:15.768 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.768 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:15.768 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:15.768 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:15.768 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.768 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:16.026 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.026 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:16.026 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.026 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:16.285 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:16.285 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3845000 00:20:16.285 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3845000 ']' 00:20:16.285 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3845000 00:20:16.285 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@957 -- # uname 00:20:16.285 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:16.285 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3845000 00:20:16.285 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:16.285 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:16.285 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3845000' 00:20:16.285 killing process with pid 3845000 00:20:16.285 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3845000 00:20:16.285 10:48:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3845000 00:20:16.285 { 00:20:16.285 "results": [ 00:20:16.285 { 00:20:16.285 "job": "Nvme0n1", 00:20:16.285 "core_mask": "0x4", 00:20:16.285 "workload": "verify", 00:20:16.285 "status": "terminated", 00:20:16.285 "verify_range": { 00:20:16.285 "start": 0, 00:20:16.285 "length": 16384 00:20:16.285 }, 00:20:16.285 "queue_depth": 128, 00:20:16.285 "io_size": 4096, 00:20:16.285 "runtime": 27.618855, 00:20:16.285 "iops": 16155.014391436574, 00:20:16.285 "mibps": 63.10552496654912, 00:20:16.285 "io_failed": 0, 00:20:16.285 "io_timeout": 0, 00:20:16.285 "avg_latency_us": 7902.874339570983, 00:20:16.285 "min_latency_us": 58.5728, 00:20:16.285 "max_latency_us": 3019898.88 00:20:16.285 } 00:20:16.285 ], 00:20:16.285 "core_count": 1 00:20:16.285 } 00:20:16.570 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3845000 00:20:16.570 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:16.570 [2024-11-07 10:48:15.151784] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:20:16.570 [2024-11-07 10:48:15.151836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845000 ] 00:20:16.570 [2024-11-07 10:48:15.223463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.570 [2024-11-07 10:48:15.262970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.570 Running I/O for 90 seconds... 
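A sanity check on the terminated-job summary printed above: at a 4096-byte io_size, 16155 IOPS works out to exactly the reported 63.1 MiB/s, and the ~3.02 s max latency (against a 7.9 ms average) is the visible cost of I/O stalling while both listeners sat inaccessible until a path came back. The arithmetic, assuming the JSON block were saved to a file (results.json here is hypothetical):

    jq -r '.results[0] | "\(.iops * .io_size / 1048576) MiB/s over \(.runtime) s"' results.json
    # -> 63.105... MiB/s over 27.618855 s, matching the reported "mibps" field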
00:20:16.570 18560.00 IOPS, 72.50 MiB/s [2024-11-07T09:48:44.241Z] 18705.50 IOPS, 73.07 MiB/s [2024-11-07T09:48:44.241Z] 18752.33 IOPS, 73.25 MiB/s [2024-11-07T09:48:44.241Z] 18752.00 IOPS, 73.25 MiB/s [2024-11-07T09:48:44.241Z] 18760.80 IOPS, 73.28 MiB/s [2024-11-07T09:48:44.241Z] 18794.67 IOPS, 73.42 MiB/s [2024-11-07T09:48:44.241Z] 18817.71 IOPS, 73.51 MiB/s [2024-11-07T09:48:44.241Z] 18831.12 IOPS, 73.56 MiB/s [2024-11-07T09:48:44.241Z] 18829.44 IOPS, 73.55 MiB/s [2024-11-07T09:48:44.241Z] 18828.10 IOPS, 73.55 MiB/s [2024-11-07T09:48:44.241Z] 18831.73 IOPS, 73.56 MiB/s [2024-11-07T09:48:44.241Z] 18829.42 IOPS, 73.55 MiB/s [2024-11-07T09:48:44.241Z] [2024-11-07 10:48:28.470456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.570 [2024-11-07 10:48:28.470499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.470539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.570 [2024-11-07 10:48:28.470550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.470764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.570 [2024-11-07 10:48:28.470776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.470789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.570 [2024-11-07 10:48:28.470798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.470811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.570 [2024-11-07 10:48:28.470821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.470834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.570 [2024-11-07 10:48:28.470843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.470855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.570 [2024-11-07 10:48:28.470864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.470876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.570 [2024-11-07 10:48:28.470886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:16.570 
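Every completion in the dump that follows carries status (03/02): status code type 0x3 (path-related) with status code 0x2, Asymmetric Access Inaccessible, i.e. the target is rejecting I/O queued to a listener whose ANA state just flipped; dnr:0 (do-not-retry clear) leaves the host free to resubmit the command on the other path, which is exactly the signal the multipath failover in this test rides on. To confirm no other error status appears in the saved log, something like:

    grep -oE '\([0-9a-f]{2}/[0-9a-f]{2}\)' \
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt | sort | uniq -c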
[2024-11-07 10:48:28.470898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.570 [2024-11-07 10:48:28.470908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.470919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.570 [2024-11-07 10:48:28.470935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.470949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x183800 00:20:16.570 [2024-11-07 10:48:28.470959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.470971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x183800 00:20:16.570 [2024-11-07 10:48:28.470980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.470991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x183800 00:20:16.570 [2024-11-07 10:48:28.471000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.471011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x183800 00:20:16.570 [2024-11-07 10:48:28.471020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.471031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x183800 00:20:16.570 [2024-11-07 10:48:28.471040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.471051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x183800 00:20:16.570 [2024-11-07 10:48:28.471060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.471071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x183800 00:20:16.570 [2024-11-07 10:48:28.471080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.471091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:105 nsid:1 lba:12464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x183800 00:20:16.570 [2024-11-07 10:48:28.471101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.471112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x183800 00:20:16.570 [2024-11-07 10:48:28.471121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.471133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x183800 00:20:16.570 [2024-11-07 10:48:28.471145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.471156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x183800 00:20:16.570 [2024-11-07 10:48:28.471168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.471180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x183800 00:20:16.570 [2024-11-07 10:48:28.471189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.471200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x183800 00:20:16.570 [2024-11-07 10:48:28.471209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.471221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x183800 00:20:16.570 [2024-11-07 10:48:28.471230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:16.570 [2024-11-07 10:48:28.471243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x183800 00:20:16.570 [2024-11-07 10:48:28.471252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12536 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x20000433a000 len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004338000 len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004336000 len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004332000 len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f0000 len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ee000 len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ec000 len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 
len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e8000 len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e6000 len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e4000 len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x183800 00:20:16.571 [2024-11-07 10:48:28.471648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:16.571 [2024-11-07 10:48:28.471659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x183800 00:20:16.571 
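For reading the per-command records: sqid/cid name the submission queue and command identifier, nsid/lba/len locate the I/O (len:8 blocks, which at the usual 512 B block size matches the 4 KiB io_size above), the SGL KEYED DATA BLOCK entries carry the RDMA address/length/rkey triple (key:0x183800) the target reads from or writes to, and in each completion sqhd is the submission queue head pointer while p, m, and dnr are the phase, more, and do-not-retry bits. A quick way to bound the LBA range the failed commands covered:

    grep -oE 'lba:[0-9]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt |
        cut -d: -f2 | sort -n | sed -n '1p;$p'   # lowest and highest LBA in the trace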
[2024-11-07 10:48:28.471669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:20:16.571 [... dozens of further nvme_qpair.c *NOTICE* command/completion pairs omitted: READ (lba 12688-13168, SGL KEYED DATA BLOCK) and WRITE (lba 13256-13424, SGL DATA BLOCK) commands on sqid:1 nsid:1, every one completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:20:16.573 17763.62 IOPS, 69.39 MiB/s
[2024-11-07T09:48:44.244Z] 16494.79 IOPS, 64.43 MiB/s
[2024-11-07T09:48:44.244Z] 15395.13 IOPS, 60.14 MiB/s
[2024-11-07T09:48:44.244Z] 15307.50 IOPS, 59.79 MiB/s
[2024-11-07T09:48:44.244Z] 15522.71 IOPS, 60.64 MiB/s
[2024-11-07T09:48:44.244Z] 15622.50 IOPS, 61.03 MiB/s
[2024-11-07T09:48:44.244Z] 15607.79 IOPS, 60.97 MiB/s
[2024-11-07T09:48:44.244Z] 15592.60 IOPS, 60.91 MiB/s
[2024-11-07T09:48:44.244Z] 15741.90 IOPS, 61.49 MiB/s
[2024-11-07T09:48:44.244Z] 15888.05 IOPS, 62.06 MiB/s
[2024-11-07T09:48:44.244Z] 15993.96 IOPS, 62.48 MiB/s
[2024-11-07T09:48:44.244Z] 15960.96 IOPS, 62.35 MiB/s
[2024-11-07T09:48:44.244Z] 15930.60 IOPS, 62.23 MiB/s
[2024-11-07T09:48:44.244Z] [2024-11-07 10:48:41.621385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x183800
00:20:16.573 [... dozens of further nvme_qpair.c *NOTICE* command/completion pairs omitted: READ/WRITE commands on sqid:1 nsid:1 (lba 86496-87520), every one again completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:20:16.575 15989.23 IOPS, 62.46 MiB/s
[2024-11-07T09:48:44.246Z] 16096.52 IOPS, 62.88 MiB/s
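The "(03/02)" printed with every completion above is the NVMe status pair (SCT/SC): status code type 0x3 is Path Related Status, and status code 0x02 within that type is Asymmetric Access Inaccessible, the ANA state this multipath test deliberately drives the active path into so that I/O fails over to the other controller. A minimal decode sketch (illustrative only; decode_nvme_status is not a helper from the SPDK tree):

    # Hypothetical helper, not part of the test scripts: maps the hex
    # "(SCT/SC)" pair from the log to the ANA condition it encodes.
    decode_nvme_status() {
        local sct=$((16#$1)) sc=$((16#$2))
        if (( sct == 3 && sc == 2 )); then
            echo 'Path Related Status: Asymmetric Access Inaccessible (ANA)'
        else
            echo "sct=0x$1 sc=0x$2"
        fi
    }
    decode_nvme_status 03 02

The throughput samples interleaved with the notices are consistent with the job's 4 KiB I/O size: 17763.62 IOPS x 4096 B / 2^20 B/MiB ≈ 69.39 MiB/s, matching the first sample, and the dip toward ~60 MiB/s tracks the failover window.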
[2024-11-07T09:48:44.246Z] Received shutdown signal, test time was about 27.619484 seconds
00:20:16.575
00:20:16.575 Latency(us)
00:20:16.575 [2024-11-07T09:48:44.246Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min          max
00:20:16.575 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:16.575 Verification LBA range: start 0x0 length 0x4000
00:20:16.575 Nvme0n1             :      27.62   16155.01      63.11       0.00     0.00    7902.87      58.57   3019898.88
00:20:16.575 [2024-11-07T09:48:44.246Z] ===================================================================================================================
00:20:16.575 [2024-11-07T09:48:44.246Z] Total               :            16155.01      63.11       0.00     0.00    7902.87      58.57   3019898.88
00:20:16.575 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:16.834 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:20:16.834 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:20:16.834 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:20:16.834 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:20:16.835 rmmod nvme_rdma
00:20:16.835 rmmod nvme_fabrics
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3844717 ']'
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3844717
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3844717 ']'
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3844717
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3844717
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3844717'
00:20:16.835 killing process with pid 3844717
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3844717
00:20:16.835 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3844717
00:20:17.093 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:17.093 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:20:17.093
00:20:17.093 real    0m37.944s
00:20:17.093 user    1m47.566s
00:20:17.093 sys     0m9.183s
00:20:17.093 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable
00:20:17.093 10:48:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:20:17.094 ************************************
00:20:17.094 END TEST nvmf_host_multipath_status
00:20:17.094 ************************************
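The trace above is the whole teardown path for the multipath test: multipath_status.sh deletes the subsystem over JSON-RPC, then nvmftestfini syncs, unloads the RDMA transport modules, and kills the target process. Run by hand it would look roughly like this sketch, where $rootdir and $nvmfpid stand in for the workspace path and the target pid (3844717 in this run):

    # Sketch of the traced cleanup, not a verbatim copy of nvmftestfini:
    $rootdir/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-rdma       # the log shows this also drops nvme_fabrics
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"    # reactor_0, the nvmf target reactor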
00:20:17.094 10:48:44 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:20:17.094 10:48:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:20:17.094 10:48:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:20:17.094 10:48:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:20:17.094 ************************************
00:20:17.094 START TEST nvmf_discovery_remove_ifc
00:20:17.094 ************************************
00:20:17.094 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:20:17.094 * Looking for test storage...
00:20:17.094 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:20:17.094 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:20:17.094 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version
00:20:17.094 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:20:17.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:17.353 --rc genhtml_branch_coverage=1
00:20:17.353 --rc genhtml_function_coverage=1
00:20:17.353 --rc genhtml_legend=1
00:20:17.353 --rc geninfo_all_blocks=1
00:20:17.353 --rc geninfo_unexecuted_blocks=1
00:20:17.353
00:20:17.353 '
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:20:17.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:17.353 --rc genhtml_branch_coverage=1
00:20:17.353 --rc genhtml_function_coverage=1
00:20:17.353 --rc genhtml_legend=1
00:20:17.353 --rc geninfo_all_blocks=1
00:20:17.353 --rc geninfo_unexecuted_blocks=1
00:20:17.353
00:20:17.353 '
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:20:17.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:17.353 --rc genhtml_branch_coverage=1
00:20:17.353 --rc genhtml_function_coverage=1
00:20:17.353 --rc genhtml_legend=1
00:20:17.353 --rc geninfo_all_blocks=1
00:20:17.353 --rc geninfo_unexecuted_blocks=1
00:20:17.353
00:20:17.353 '
00:20:17.353 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:20:17.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:17.354 --rc genhtml_branch_coverage=1
00:20:17.354 --rc genhtml_function_coverage=1
00:20:17.354 --rc genhtml_legend=1
00:20:17.354 --rc geninfo_all_blocks=1
00:20:17.354 --rc geninfo_unexecuted_blocks=1
00:20:17.354
00:20:17.354 '
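The lt 1.15 2 trace above is scripts/common.sh comparing the installed lcov version component-wise after splitting on '.', '-' and ':'. A condensed sketch of that comparison logic (a hypothetical standalone rewrite of the idea, not the exact SPDK helper):

    # Return 0 when $1 < $2, comparing dot/dash/colon-separated numeric
    # components left to right; missing components count as 0.
    version_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2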
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
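Each time paths/export.sh is sourced it prepends its tool directories again, which is why the PATH echoed above repeats /opt/golangci, /opt/protoc and /opt/go several times. That is harmless here, but if the duplication mattered, a guard like this sketch (hypothetical helper, not what export.sh actually does) keeps PATH idempotent:

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already there, do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/go/1.21.1/bin   # second call is a no-op
    export PATH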
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
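The "[: : integer expression expected" complaint above comes from '[' '' -eq 1 ']' at nvmf/common.sh line 33: the variable being tested is unset, so test receives an empty string where -eq needs an integer. The usual fix is to default the expansion, as in this sketch (SOME_TEST_FLAG is a hypothetical stand-in for whatever config variable is empty here):

    # Defaulting the expansion avoids the empty-string -eq comparison.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi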
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']'
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0
00:20:17.354
00:20:17.354 real 0m0.216s
00:20:17.354 user 0m0.122s
00:20:17.354 sys 0m0.112s
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:20:17.354 ************************************
00:20:17.354 END TEST nvmf_discovery_remove_ifc
00:20:17.354 ************************************
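discovery_remove_ifc.sh bails out early on RDMA, as traced above: the guard is just a transport check followed by exit 0, so the suite records a clean skip rather than a failure. The pattern, reduced to a sketch (variable name assumed; the trace only shows the expanded comparison):

    TEST_TRANSPORT=rdma   # set by --transport=rdma in this run
    if [ "$TEST_TRANSPORT" == rdma ]; then
        echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
        exit 0
    fi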
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:20:17.354 ************************************
00:20:17.354 START TEST nvmf_identify_kernel_target
00:20:17.354 ************************************
00:20:17.354 10:48:44 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:20:17.354 * Looking for test storage...
00:20:17.354 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:20:17.354 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:20:17.354 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version
00:20:17.354 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-:
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-:
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<'
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:17.613 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:20:17.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:17.614 --rc genhtml_branch_coverage=1
00:20:17.614 --rc genhtml_function_coverage=1
00:20:17.614 --rc genhtml_legend=1
00:20:17.614 --rc geninfo_all_blocks=1
00:20:17.614 --rc geninfo_unexecuted_blocks=1
00:20:17.614
00:20:17.614 '
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:20:17.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:17.614 --rc genhtml_branch_coverage=1
00:20:17.614 --rc genhtml_function_coverage=1
00:20:17.614 --rc genhtml_legend=1
00:20:17.614 --rc geninfo_all_blocks=1
00:20:17.614 --rc geninfo_unexecuted_blocks=1
00:20:17.614
00:20:17.614 '
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:20:17.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:17.614 --rc genhtml_branch_coverage=1
00:20:17.614 --rc genhtml_function_coverage=1
00:20:17.614 --rc genhtml_legend=1
00:20:17.614 --rc geninfo_all_blocks=1
00:20:17.614 --rc geninfo_unexecuted_blocks=1
00:20:17.614
00:20:17.614 '
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:20:17.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:17.614 --rc genhtml_branch_coverage=1
00:20:17.614 --rc genhtml_function_coverage=1
00:20:17.614 --rc genhtml_legend=1
00:20:17.614 --rc geninfo_all_blocks=1
00:20:17.614 --rc geninfo_unexecuted_blocks=1
00:20:17.614
00:20:17.614 '
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:20:17.614 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit
00:20:17.615 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:20:17.615 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:17.615 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:20:17.615 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:20:17.615 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:20:17.615 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:17.615 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:17.615 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:17.615 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:20:17.615 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:20:17.615 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable
00:20:17.615 10:48:45 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=()
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=()
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=()
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=()
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=()
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)'
Found 0000:d9:00.0 (0x15b3 - 0x1015)
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)'
Found 0000:d9:00.1 (0x15b3 - 0x1015)
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0'
Found net devices under 0000:d9:00.0: mlx_0_0
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1'
Found net devices under 0000:d9:00.1: mlx_0_1
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
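gather_supported_nvmf_pci_devs above matches PCI IDs against known Intel (0x8086) and Mellanox (0x15b3) NIC device IDs; here it found two ConnectX ports with device ID 0x1015 at 0000:d9:00.0/.1 and resolved their netdevs through sysfs. Roughly the same information can be pulled by hand (an illustrative sketch, not the SPDK code path; the bus address is the one from this run):

    # List Mellanox (vendor 0x15b3) PCI devices, then the netdev behind one.
    lspci -d 15b3: -nn
    ls /sys/bus/pci/devices/0000:d9:00.0/net/   # -> mlx_0_0 on this box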
00:20:24.179 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2
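get_rdma_if_list above cross-checks each candidate netdev against the devices rxe_cfg knows about before emitting it. For RDMA-capable ports like these, the ibdev-to-netdev mapping is also visible directly in sysfs (a manual sketch under the assumption that the mlx5 devices from this run are present):

    # Each RDMA device exposes its backing netdev(s) under device/net/.
    for ibdev in /sys/class/infiniband/*; do
        echo "$(basename "$ibdev") -> $(ls "$ibdev"/device/net/)"
    done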
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:20:24.180 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:20:24.180 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff
00:20:24.180 altname enp217s0f0np0
00:20:24.180 altname ens818f0np0
00:20:24.180 inet 192.168.100.8/24 scope global mlx_0_0
00:20:24.180 valid_lft forever preferred_lft forever
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:20:24.180 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:20:24.180 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff
00:20:24.180 altname enp217s0f1np1
00:20:24.180 altname ens818f1np1
00:20:24.180 inet 192.168.100.9/24 scope global mlx_0_1
00:20:24.180 valid_lft forever preferred_lft forever
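The awk/cut pipeline traced above is how the harness extracts an interface's primary IPv4 address. Condensed into one function (same technique the trace shows for nvmf/common.sh's get_ip_address; sketched from the trace, not copied from the script):

    get_ip_address() {
        # "ip -o" prints one line per address; field 4 is CIDR, e.g. 192.168.100.8/24
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 in this run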
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:20:24.180 192.168.100.9'
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:20:24.180 192.168.100.9'
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:20:24.180 192.168.100.9'
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=()
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]]
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]]
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]]
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:20:24.180 10:48:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:20:26.714 Waiting for block devices as requested
0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
No valid GPT data, bailing
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt=
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4
00:20:28.008 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
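configure_kernel_target above drives the kernel nvmet target entirely through configfs: each mkdir materializes an object and each echo (the redirection targets are hidden by the trace) fills in an attribute. Mapped onto the stock kernel nvmet configfs layout, the sequence looks roughly like this sketch; the attribute names are the standard kernel ones and my best-guess mapping for the echoed values, not something the trace itself shows:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"          # model string
    echo 1             > "$subsys/attr_allow_any_host"                    # no host allowlist
    echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"               # back it with the local NVMe disk
    echo 1             > "$subsys/namespaces/1/enable"
    echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
    echo rdma          > "$nvmet/ports/1/addr_trtype"
    echo 4420          > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4          > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"                          # expose subsystem on the port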
flow control disable supported 00:20:28.267 portid: 1 00:20:28.267 trsvcid: 4420 00:20:28.267 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:28.267 traddr: 192.168.100.8 00:20:28.267 eflags: none 00:20:28.267 rdma_prtype: not specified 00:20:28.267 rdma_qptype: connected 00:20:28.267 rdma_cms: rdma-cm 00:20:28.267 rdma_pkey: 0x0000 00:20:28.267 =====Discovery Log Entry 1====== 00:20:28.267 trtype: rdma 00:20:28.267 adrfam: ipv4 00:20:28.267 subtype: nvme subsystem 00:20:28.267 treq: not specified, sq flow control disable supported 00:20:28.267 portid: 1 00:20:28.267 trsvcid: 4420 00:20:28.267 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:28.267 traddr: 192.168.100.8 00:20:28.267 eflags: none 00:20:28.267 rdma_prtype: not specified 00:20:28.267 rdma_qptype: connected 00:20:28.267 rdma_cms: rdma-cm 00:20:28.267 rdma_pkey: 0x0000 00:20:28.267 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:20:28.267 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:28.527 ===================================================== 00:20:28.527 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:28.527 ===================================================== 00:20:28.527 Controller Capabilities/Features 00:20:28.527 ================================ 00:20:28.527 Vendor ID: 0000 00:20:28.527 Subsystem Vendor ID: 0000 00:20:28.527 Serial Number: c8461a453a2e70549c80 00:20:28.527 Model Number: Linux 00:20:28.527 Firmware Version: 6.8.9-20 00:20:28.527 Recommended Arb Burst: 0 00:20:28.527 IEEE OUI Identifier: 00 00 00 00:20:28.527 Multi-path I/O 00:20:28.527 May have multiple subsystem ports: No 00:20:28.527 May have multiple controllers: No 00:20:28.527 Associated with SR-IOV VF: No 00:20:28.527 Max Data Transfer Size: Unlimited 00:20:28.527 Max Number of Namespaces: 0 00:20:28.527 Max Number of I/O Queues: 1024 00:20:28.527 NVMe Specification Version (VS): 1.3 00:20:28.527 NVMe Specification Version (Identify): 1.3 00:20:28.527 Maximum Queue Entries: 128 00:20:28.527 Contiguous Queues Required: No 00:20:28.527 Arbitration Mechanisms Supported 00:20:28.527 Weighted Round Robin: Not Supported 00:20:28.527 Vendor Specific: Not Supported 00:20:28.527 Reset Timeout: 7500 ms 00:20:28.527 Doorbell Stride: 4 bytes 00:20:28.527 NVM Subsystem Reset: Not Supported 00:20:28.527 Command Sets Supported 00:20:28.527 NVM Command Set: Supported 00:20:28.527 Boot Partition: Not Supported 00:20:28.527 Memory Page Size Minimum: 4096 bytes 00:20:28.527 Memory Page Size Maximum: 4096 bytes 00:20:28.527 Persistent Memory Region: Not Supported 00:20:28.527 Optional Asynchronous Events Supported 00:20:28.527 Namespace Attribute Notices: Not Supported 00:20:28.527 Firmware Activation Notices: Not Supported 00:20:28.527 ANA Change Notices: Not Supported 00:20:28.527 PLE Aggregate Log Change Notices: Not Supported 00:20:28.527 LBA Status Info Alert Notices: Not Supported 00:20:28.527 EGE Aggregate Log Change Notices: Not Supported 00:20:28.527 Normal NVM Subsystem Shutdown event: Not Supported 00:20:28.527 Zone Descriptor Change Notices: Not Supported 00:20:28.527 Discovery Log Change Notices: Supported 00:20:28.527 Controller Attributes 00:20:28.527 128-bit Host Identifier: Not Supported 00:20:28.527 Non-Operational Permissive Mode: Not Supported 00:20:28.527 NVM Sets: Not Supported 00:20:28.527 Read Recovery Levels: 
Not Supported 00:20:28.527 Endurance Groups: Not Supported 00:20:28.527 Predictable Latency Mode: Not Supported 00:20:28.527 Traffic Based Keep ALive: Not Supported 00:20:28.527 Namespace Granularity: Not Supported 00:20:28.527 SQ Associations: Not Supported 00:20:28.527 UUID List: Not Supported 00:20:28.527 Multi-Domain Subsystem: Not Supported 00:20:28.527 Fixed Capacity Management: Not Supported 00:20:28.527 Variable Capacity Management: Not Supported 00:20:28.527 Delete Endurance Group: Not Supported 00:20:28.527 Delete NVM Set: Not Supported 00:20:28.527 Extended LBA Formats Supported: Not Supported 00:20:28.527 Flexible Data Placement Supported: Not Supported 00:20:28.527 00:20:28.527 Controller Memory Buffer Support 00:20:28.527 ================================ 00:20:28.527 Supported: No 00:20:28.527 00:20:28.527 Persistent Memory Region Support 00:20:28.527 ================================ 00:20:28.527 Supported: No 00:20:28.527 00:20:28.527 Admin Command Set Attributes 00:20:28.527 ============================ 00:20:28.527 Security Send/Receive: Not Supported 00:20:28.527 Format NVM: Not Supported 00:20:28.527 Firmware Activate/Download: Not Supported 00:20:28.527 Namespace Management: Not Supported 00:20:28.527 Device Self-Test: Not Supported 00:20:28.527 Directives: Not Supported 00:20:28.527 NVMe-MI: Not Supported 00:20:28.527 Virtualization Management: Not Supported 00:20:28.527 Doorbell Buffer Config: Not Supported 00:20:28.527 Get LBA Status Capability: Not Supported 00:20:28.527 Command & Feature Lockdown Capability: Not Supported 00:20:28.527 Abort Command Limit: 1 00:20:28.527 Async Event Request Limit: 1 00:20:28.527 Number of Firmware Slots: N/A 00:20:28.527 Firmware Slot 1 Read-Only: N/A 00:20:28.527 Firmware Activation Without Reset: N/A 00:20:28.527 Multiple Update Detection Support: N/A 00:20:28.527 Firmware Update Granularity: No Information Provided 00:20:28.527 Per-Namespace SMART Log: No 00:20:28.527 Asymmetric Namespace Access Log Page: Not Supported 00:20:28.527 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:28.528 Command Effects Log Page: Not Supported 00:20:28.528 Get Log Page Extended Data: Supported 00:20:28.528 Telemetry Log Pages: Not Supported 00:20:28.528 Persistent Event Log Pages: Not Supported 00:20:28.528 Supported Log Pages Log Page: May Support 00:20:28.528 Commands Supported & Effects Log Page: Not Supported 00:20:28.528 Feature Identifiers & Effects Log Page:May Support 00:20:28.528 NVMe-MI Commands & Effects Log Page: May Support 00:20:28.528 Data Area 4 for Telemetry Log: Not Supported 00:20:28.528 Error Log Page Entries Supported: 1 00:20:28.528 Keep Alive: Not Supported 00:20:28.528 00:20:28.528 NVM Command Set Attributes 00:20:28.528 ========================== 00:20:28.528 Submission Queue Entry Size 00:20:28.528 Max: 1 00:20:28.528 Min: 1 00:20:28.528 Completion Queue Entry Size 00:20:28.528 Max: 1 00:20:28.528 Min: 1 00:20:28.528 Number of Namespaces: 0 00:20:28.528 Compare Command: Not Supported 00:20:28.528 Write Uncorrectable Command: Not Supported 00:20:28.528 Dataset Management Command: Not Supported 00:20:28.528 Write Zeroes Command: Not Supported 00:20:28.528 Set Features Save Field: Not Supported 00:20:28.528 Reservations: Not Supported 00:20:28.528 Timestamp: Not Supported 00:20:28.528 Copy: Not Supported 00:20:28.528 Volatile Write Cache: Not Present 00:20:28.528 Atomic Write Unit (Normal): 1 00:20:28.528 Atomic Write Unit (PFail): 1 00:20:28.528 Atomic Compare & Write Unit: 1 00:20:28.528 Fused Compare & Write: Not 
Supported 00:20:28.528 Scatter-Gather List 00:20:28.528 SGL Command Set: Supported 00:20:28.528 SGL Keyed: Supported 00:20:28.528 SGL Bit Bucket Descriptor: Not Supported 00:20:28.528 SGL Metadata Pointer: Not Supported 00:20:28.528 Oversized SGL: Not Supported 00:20:28.528 SGL Metadata Address: Not Supported 00:20:28.528 SGL Offset: Supported 00:20:28.528 Transport SGL Data Block: Not Supported 00:20:28.528 Replay Protected Memory Block: Not Supported 00:20:28.528 00:20:28.528 Firmware Slot Information 00:20:28.528 ========================= 00:20:28.528 Active slot: 0 00:20:28.528 00:20:28.528 00:20:28.528 Error Log 00:20:28.528 ========= 00:20:28.528 00:20:28.528 Active Namespaces 00:20:28.528 ================= 00:20:28.528 Discovery Log Page 00:20:28.528 ================== 00:20:28.528 Generation Counter: 2 00:20:28.528 Number of Records: 2 00:20:28.528 Record Format: 0 00:20:28.528 00:20:28.528 Discovery Log Entry 0 00:20:28.528 ---------------------- 00:20:28.528 Transport Type: 1 (RDMA) 00:20:28.528 Address Family: 1 (IPv4) 00:20:28.528 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:28.528 Entry Flags: 00:20:28.528 Duplicate Returned Information: 0 00:20:28.528 Explicit Persistent Connection Support for Discovery: 0 00:20:28.528 Transport Requirements: 00:20:28.528 Secure Channel: Not Specified 00:20:28.528 Port ID: 1 (0x0001) 00:20:28.528 Controller ID: 65535 (0xffff) 00:20:28.528 Admin Max SQ Size: 32 00:20:28.528 Transport Service Identifier: 4420 00:20:28.528 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:28.528 Transport Address: 192.168.100.8 00:20:28.528 Transport Specific Address Subtype - RDMA 00:20:28.528 RDMA QP Service Type: 1 (Reliable Connected) 00:20:28.528 RDMA Provider Type: 1 (No provider specified) 00:20:28.528 RDMA CM Service: 1 (RDMA_CM) 00:20:28.528 Discovery Log Entry 1 00:20:28.528 ---------------------- 00:20:28.528 Transport Type: 1 (RDMA) 00:20:28.528 Address Family: 1 (IPv4) 00:20:28.528 Subsystem Type: 2 (NVM Subsystem) 00:20:28.528 Entry Flags: 00:20:28.528 Duplicate Returned Information: 0 00:20:28.528 Explicit Persistent Connection Support for Discovery: 0 00:20:28.528 Transport Requirements: 00:20:28.528 Secure Channel: Not Specified 00:20:28.528 Port ID: 1 (0x0001) 00:20:28.528 Controller ID: 65535 (0xffff) 00:20:28.528 Admin Max SQ Size: 32 00:20:28.528 Transport Service Identifier: 4420 00:20:28.528 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:28.528 Transport Address: 192.168.100.8 00:20:28.528 Transport Specific Address Subtype - RDMA 00:20:28.528 RDMA QP Service Type: 1 (Reliable Connected) 00:20:28.528 RDMA Provider Type: 1 (No provider specified) 00:20:28.528 RDMA CM Service: 1 (RDMA_CM) 00:20:28.528 10:48:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:28.528 get_feature(0x01) failed 00:20:28.528 get_feature(0x02) failed 00:20:28.528 get_feature(0x04) failed 00:20:28.528 ===================================================== 00:20:28.528 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:20:28.528 ===================================================== 00:20:28.528 Controller Capabilities/Features 00:20:28.528 ================================ 00:20:28.528 Vendor ID: 0000 00:20:28.528 Subsystem Vendor ID: 0000 00:20:28.528 Serial Number: 
88ba45e6c63cb97a7064 00:20:28.528 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:28.528 Firmware Version: 6.8.9-20 00:20:28.528 Recommended Arb Burst: 6 00:20:28.528 IEEE OUI Identifier: 00 00 00 00:20:28.528 Multi-path I/O 00:20:28.528 May have multiple subsystem ports: Yes 00:20:28.528 May have multiple controllers: Yes 00:20:28.528 Associated with SR-IOV VF: No 00:20:28.528 Max Data Transfer Size: 1048576 00:20:28.528 Max Number of Namespaces: 1024 00:20:28.528 Max Number of I/O Queues: 128 00:20:28.528 NVMe Specification Version (VS): 1.3 00:20:28.528 NVMe Specification Version (Identify): 1.3 00:20:28.528 Maximum Queue Entries: 128 00:20:28.528 Contiguous Queues Required: No 00:20:28.528 Arbitration Mechanisms Supported 00:20:28.528 Weighted Round Robin: Not Supported 00:20:28.528 Vendor Specific: Not Supported 00:20:28.528 Reset Timeout: 7500 ms 00:20:28.528 Doorbell Stride: 4 bytes 00:20:28.528 NVM Subsystem Reset: Not Supported 00:20:28.528 Command Sets Supported 00:20:28.528 NVM Command Set: Supported 00:20:28.528 Boot Partition: Not Supported 00:20:28.528 Memory Page Size Minimum: 4096 bytes 00:20:28.528 Memory Page Size Maximum: 4096 bytes 00:20:28.528 Persistent Memory Region: Not Supported 00:20:28.528 Optional Asynchronous Events Supported 00:20:28.528 Namespace Attribute Notices: Supported 00:20:28.528 Firmware Activation Notices: Not Supported 00:20:28.528 ANA Change Notices: Supported 00:20:28.528 PLE Aggregate Log Change Notices: Not Supported 00:20:28.528 LBA Status Info Alert Notices: Not Supported 00:20:28.528 EGE Aggregate Log Change Notices: Not Supported 00:20:28.528 Normal NVM Subsystem Shutdown event: Not Supported 00:20:28.528 Zone Descriptor Change Notices: Not Supported 00:20:28.528 Discovery Log Change Notices: Not Supported 00:20:28.528 Controller Attributes 00:20:28.528 128-bit Host Identifier: Supported 00:20:28.528 Non-Operational Permissive Mode: Not Supported 00:20:28.528 NVM Sets: Not Supported 00:20:28.528 Read Recovery Levels: Not Supported 00:20:28.528 Endurance Groups: Not Supported 00:20:28.528 Predictable Latency Mode: Not Supported 00:20:28.528 Traffic Based Keep ALive: Supported 00:20:28.528 Namespace Granularity: Not Supported 00:20:28.528 SQ Associations: Not Supported 00:20:28.528 UUID List: Not Supported 00:20:28.528 Multi-Domain Subsystem: Not Supported 00:20:28.528 Fixed Capacity Management: Not Supported 00:20:28.528 Variable Capacity Management: Not Supported 00:20:28.528 Delete Endurance Group: Not Supported 00:20:28.528 Delete NVM Set: Not Supported 00:20:28.528 Extended LBA Formats Supported: Not Supported 00:20:28.528 Flexible Data Placement Supported: Not Supported 00:20:28.528 00:20:28.528 Controller Memory Buffer Support 00:20:28.528 ================================ 00:20:28.528 Supported: No 00:20:28.528 00:20:28.528 Persistent Memory Region Support 00:20:28.528 ================================ 00:20:28.528 Supported: No 00:20:28.528 00:20:28.528 Admin Command Set Attributes 00:20:28.528 ============================ 00:20:28.528 Security Send/Receive: Not Supported 00:20:28.528 Format NVM: Not Supported 00:20:28.528 Firmware Activate/Download: Not Supported 00:20:28.528 Namespace Management: Not Supported 00:20:28.528 Device Self-Test: Not Supported 00:20:28.528 Directives: Not Supported 00:20:28.528 NVMe-MI: Not Supported 00:20:28.528 Virtualization Management: Not Supported 00:20:28.529 Doorbell Buffer Config: Not Supported 00:20:28.529 Get LBA Status Capability: Not Supported 00:20:28.529 Command & Feature Lockdown 
Capability: Not Supported 00:20:28.529 Abort Command Limit: 4 00:20:28.529 Async Event Request Limit: 4 00:20:28.529 Number of Firmware Slots: N/A 00:20:28.529 Firmware Slot 1 Read-Only: N/A 00:20:28.529 Firmware Activation Without Reset: N/A 00:20:28.529 Multiple Update Detection Support: N/A 00:20:28.529 Firmware Update Granularity: No Information Provided 00:20:28.529 Per-Namespace SMART Log: Yes 00:20:28.529 Asymmetric Namespace Access Log Page: Supported 00:20:28.529 ANA Transition Time : 10 sec 00:20:28.529 00:20:28.529 Asymmetric Namespace Access Capabilities 00:20:28.529 ANA Optimized State : Supported 00:20:28.529 ANA Non-Optimized State : Supported 00:20:28.529 ANA Inaccessible State : Supported 00:20:28.529 ANA Persistent Loss State : Supported 00:20:28.529 ANA Change State : Supported 00:20:28.529 ANAGRPID is not changed : No 00:20:28.529 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:28.529 00:20:28.529 ANA Group Identifier Maximum : 128 00:20:28.529 Number of ANA Group Identifiers : 128 00:20:28.529 Max Number of Allowed Namespaces : 1024 00:20:28.529 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:20:28.529 Command Effects Log Page: Supported 00:20:28.529 Get Log Page Extended Data: Supported 00:20:28.529 Telemetry Log Pages: Not Supported 00:20:28.529 Persistent Event Log Pages: Not Supported 00:20:28.529 Supported Log Pages Log Page: May Support 00:20:28.529 Commands Supported & Effects Log Page: Not Supported 00:20:28.529 Feature Identifiers & Effects Log Page:May Support 00:20:28.529 NVMe-MI Commands & Effects Log Page: May Support 00:20:28.529 Data Area 4 for Telemetry Log: Not Supported 00:20:28.529 Error Log Page Entries Supported: 128 00:20:28.529 Keep Alive: Supported 00:20:28.529 Keep Alive Granularity: 1000 ms 00:20:28.529 00:20:28.529 NVM Command Set Attributes 00:20:28.529 ========================== 00:20:28.529 Submission Queue Entry Size 00:20:28.529 Max: 64 00:20:28.529 Min: 64 00:20:28.529 Completion Queue Entry Size 00:20:28.529 Max: 16 00:20:28.529 Min: 16 00:20:28.529 Number of Namespaces: 1024 00:20:28.529 Compare Command: Not Supported 00:20:28.529 Write Uncorrectable Command: Not Supported 00:20:28.529 Dataset Management Command: Supported 00:20:28.529 Write Zeroes Command: Supported 00:20:28.529 Set Features Save Field: Not Supported 00:20:28.529 Reservations: Not Supported 00:20:28.529 Timestamp: Not Supported 00:20:28.529 Copy: Not Supported 00:20:28.529 Volatile Write Cache: Present 00:20:28.529 Atomic Write Unit (Normal): 1 00:20:28.529 Atomic Write Unit (PFail): 1 00:20:28.529 Atomic Compare & Write Unit: 1 00:20:28.529 Fused Compare & Write: Not Supported 00:20:28.529 Scatter-Gather List 00:20:28.529 SGL Command Set: Supported 00:20:28.529 SGL Keyed: Supported 00:20:28.529 SGL Bit Bucket Descriptor: Not Supported 00:20:28.529 SGL Metadata Pointer: Not Supported 00:20:28.529 Oversized SGL: Not Supported 00:20:28.529 SGL Metadata Address: Not Supported 00:20:28.529 SGL Offset: Supported 00:20:28.529 Transport SGL Data Block: Not Supported 00:20:28.529 Replay Protected Memory Block: Not Supported 00:20:28.529 00:20:28.529 Firmware Slot Information 00:20:28.529 ========================= 00:20:28.529 Active slot: 0 00:20:28.529 00:20:28.529 Asymmetric Namespace Access 00:20:28.529 =========================== 00:20:28.529 Change Count : 0 00:20:28.529 Number of ANA Group Descriptors : 1 00:20:28.529 ANA Group Descriptor : 0 00:20:28.529 ANA Group ID : 1 00:20:28.529 Number of NSID Values : 1 00:20:28.529 Change Count : 0 00:20:28.529 ANA State 
: 1 00:20:28.529 Namespace Identifier : 1 00:20:28.529 00:20:28.529 Commands Supported and Effects 00:20:28.529 ============================== 00:20:28.529 Admin Commands 00:20:28.529 -------------- 00:20:28.529 Get Log Page (02h): Supported 00:20:28.529 Identify (06h): Supported 00:20:28.529 Abort (08h): Supported 00:20:28.529 Set Features (09h): Supported 00:20:28.529 Get Features (0Ah): Supported 00:20:28.529 Asynchronous Event Request (0Ch): Supported 00:20:28.529 Keep Alive (18h): Supported 00:20:28.529 I/O Commands 00:20:28.529 ------------ 00:20:28.529 Flush (00h): Supported 00:20:28.529 Write (01h): Supported LBA-Change 00:20:28.529 Read (02h): Supported 00:20:28.529 Write Zeroes (08h): Supported LBA-Change 00:20:28.529 Dataset Management (09h): Supported 00:20:28.529 00:20:28.529 Error Log 00:20:28.529 ========= 00:20:28.529 Entry: 0 00:20:28.529 Error Count: 0x3 00:20:28.529 Submission Queue Id: 0x0 00:20:28.529 Command Id: 0x5 00:20:28.529 Phase Bit: 0 00:20:28.529 Status Code: 0x2 00:20:28.529 Status Code Type: 0x0 00:20:28.529 Do Not Retry: 1 00:20:28.529 Error Location: 0x28 00:20:28.529 LBA: 0x0 00:20:28.529 Namespace: 0x0 00:20:28.529 Vendor Log Page: 0x0 00:20:28.529 ----------- 00:20:28.529 Entry: 1 00:20:28.529 Error Count: 0x2 00:20:28.529 Submission Queue Id: 0x0 00:20:28.529 Command Id: 0x5 00:20:28.529 Phase Bit: 0 00:20:28.529 Status Code: 0x2 00:20:28.529 Status Code Type: 0x0 00:20:28.529 Do Not Retry: 1 00:20:28.529 Error Location: 0x28 00:20:28.529 LBA: 0x0 00:20:28.529 Namespace: 0x0 00:20:28.529 Vendor Log Page: 0x0 00:20:28.529 ----------- 00:20:28.529 Entry: 2 00:20:28.529 Error Count: 0x1 00:20:28.529 Submission Queue Id: 0x0 00:20:28.529 Command Id: 0x0 00:20:28.529 Phase Bit: 0 00:20:28.529 Status Code: 0x2 00:20:28.529 Status Code Type: 0x0 00:20:28.529 Do Not Retry: 1 00:20:28.529 Error Location: 0x28 00:20:28.529 LBA: 0x0 00:20:28.529 Namespace: 0x0 00:20:28.529 Vendor Log Page: 0x0 00:20:28.529 00:20:28.529 Number of Queues 00:20:28.529 ================ 00:20:28.529 Number of I/O Submission Queues: 128 00:20:28.529 Number of I/O Completion Queues: 128 00:20:28.529 00:20:28.529 ZNS Specific Controller Data 00:20:28.529 ============================ 00:20:28.529 Zone Append Size Limit: 0 00:20:28.529 00:20:28.529 00:20:28.529 Active Namespaces 00:20:28.529 ================= 00:20:28.529 get_feature(0x05) failed 00:20:28.529 Namespace ID:1 00:20:28.529 Command Set Identifier: NVM (00h) 00:20:28.530 Deallocate: Supported 00:20:28.530 Deallocated/Unwritten Error: Not Supported 00:20:28.530 Deallocated Read Value: Unknown 00:20:28.530 Deallocate in Write Zeroes: Not Supported 00:20:28.530 Deallocated Guard Field: 0xFFFF 00:20:28.530 Flush: Supported 00:20:28.530 Reservation: Not Supported 00:20:28.530 Namespace Sharing Capabilities: Multiple Controllers 00:20:28.530 Size (in LBAs): 3907029168 (1863GiB) 00:20:28.530 Capacity (in LBAs): 3907029168 (1863GiB) 00:20:28.530 Utilization (in LBAs): 3907029168 (1863GiB) 00:20:28.530 UUID: 0092a112-c4a0-491f-8332-5f04954bae97 00:20:28.530 Thin Provisioning: Not Supported 00:20:28.530 Per-NS Atomic Units: Yes 00:20:28.530 Atomic Boundary Size (Normal): 0 00:20:28.530 Atomic Boundary Size (PFail): 0 00:20:28.530 Atomic Boundary Offset: 0 00:20:28.530 NGUID/EUI64 Never Reused: No 00:20:28.530 ANA group ID: 1 00:20:28.530 Namespace Write Protected: No 00:20:28.530 Number of LBA Formats: 1 00:20:28.530 Current LBA Format: LBA Format #00 00:20:28.530 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:28.530 00:20:28.530 
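
The identify output above was served by a kernel target assembled entirely through configfs. A condensed, standalone sketch of the configure_kernel_target sequence traced earlier (nvmf/common.sh@660-708) follows; note that xtrace does not record redirection targets, so the attribute file names below are the standard kernel nvmet ones and should be read as assumptions rather than as lines from the trace:

#!/usr/bin/env bash
# Sketch of configure_kernel_target as traced above. NQN, IP, port and
# namespace device are the values visible in the trace; the block-device
# probing and the setup.sh reset step are omitted.
set -e
nqn=nqn.2016-06.io.spdk:testnqn
ip=192.168.100.8
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/$nqn

modprobe nvmet nvmet-rdma    # target core + RDMA transport (nvmet_rdma is
                             # inferred from the later "modprobe -r nvmet_rdma nvmet")

mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"

echo "SPDK-$nqn"  > "$subsys/attr_model"             # model string seen in identify
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

echo "$ip"        > "$nvmet/ports/1/addr_traddr"
echo rdma         > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"

ln -s "$subsys" "$nvmet/ports/1/subsystems/"         # expose the subsystem on the port

Once the port symlink is in place, the two discovery log entries shown above (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn) become visible to nvme discover and to spdk_nvme_identify.
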
10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:28.530 rmmod nvme_rdma 00:20:28.530 rmmod nvme_fabrics 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:28.530 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:20:28.897 10:48:56 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:20:32.199 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:20:32.199 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:20:32.199 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:20:32.199 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:20:32.199 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:20:32.199 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:20:32.199 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:20:32.199 
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:20:32.199 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:20:32.199 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:20:32.199 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:20:32.199 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:20:32.199 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:20:32.199 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:20:32.199 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:20:32.199 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:20:34.103 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:20:34.103 00:20:34.103 real 0m16.647s 00:20:34.103 user 0m4.243s 00:20:34.103 sys 0m9.577s 00:20:34.103 10:49:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:34.103 10:49:01 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.103 ************************************ 00:20:34.103 END TEST nvmf_identify_kernel_target 00:20:34.103 ************************************ 00:20:34.103 10:49:01 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:20:34.103 10:49:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:34.103 10:49:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:34.103 10:49:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.103 ************************************ 00:20:34.103 START TEST nvmf_auth_host 00:20:34.103 ************************************ 00:20:34.103 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:20:34.103 * Looking for test storage... 
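
The clean_kernel_target teardown traced just before this point (nvmf/common.sh@712-723) is the configfs setup in reverse; a condensed sketch with the same paths as the trace:

# Disable the namespace, unlink the subsystem from the port, remove the
# configfs directories innermost-first, then unload the target modules.
nqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/$nqn

echo 0 > "$subsys/namespaces/1/enable"
rm -f  "$nvmet/ports/1/subsystems/$nqn"
rmdir  "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
modprobe -r nvmet_rdma nvmet

The rmdir order matters: configfs refuses to remove a directory that still has children or that is still linked from a port.
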
00:20:34.103 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:34.103 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:34.103 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:34.103 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:20:34.103 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:34.362 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:34.362 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:34.362 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:34.362 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:34.362 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:34.362 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:34.362 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:34.362 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:34.362 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:34.362 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:34.362 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:34.362 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:20:34.362 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:20:34.362 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:34.362 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:34.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.363 --rc genhtml_branch_coverage=1 00:20:34.363 --rc genhtml_function_coverage=1 00:20:34.363 --rc genhtml_legend=1 00:20:34.363 --rc geninfo_all_blocks=1 00:20:34.363 --rc geninfo_unexecuted_blocks=1 00:20:34.363 00:20:34.363 ' 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:34.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.363 --rc genhtml_branch_coverage=1 00:20:34.363 --rc genhtml_function_coverage=1 00:20:34.363 --rc genhtml_legend=1 00:20:34.363 --rc geninfo_all_blocks=1 00:20:34.363 --rc geninfo_unexecuted_blocks=1 00:20:34.363 00:20:34.363 ' 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:34.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.363 --rc genhtml_branch_coverage=1 00:20:34.363 --rc genhtml_function_coverage=1 00:20:34.363 --rc genhtml_legend=1 00:20:34.363 --rc geninfo_all_blocks=1 00:20:34.363 --rc geninfo_unexecuted_blocks=1 00:20:34.363 00:20:34.363 ' 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:34.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.363 --rc genhtml_branch_coverage=1 00:20:34.363 --rc genhtml_function_coverage=1 00:20:34.363 --rc genhtml_legend=1 00:20:34.363 --rc geninfo_all_blocks=1 00:20:34.363 --rc geninfo_unexecuted_blocks=1 00:20:34.363 00:20:34.363 ' 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.363 10:49:01 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:34.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.363 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.364 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:34.364 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:34.364 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:20:34.364 10:49:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:40.931 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:40.931 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:40.931 10:49:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:40.931 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:40.931 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:40.931 10:49:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.931 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 
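
The rdma_device_init and allocate_nic_ips steps traced above reduce to loading the kernel RDMA stack and reading each Mellanox interface's IPv4 address. A minimal sketch of those two steps:

# Load the RDMA/IB modules shown in the trace (nvmf/common.sh@66-72).
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done

# get_ip_address as traced (nvmf/common.sh@116-117): "ip -o -4" prints one
# line per address; field 4 is the CIDR, e.g. 192.168.100.8/24.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
get_ip_address mlx_0_1   # -> 192.168.100.9
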
00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:40.932 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:40.932 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:40.932 altname enp217s0f0np0 00:20:40.932 altname ens818f0np0 00:20:40.932 inet 192.168.100.8/24 scope global mlx_0_0 00:20:40.932 valid_lft forever preferred_lft forever 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:40.932 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:40.932 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:40.932 altname enp217s0f1np1 00:20:40.932 altname ens818f1np1 00:20:40.932 inet 192.168.100.9/24 scope global mlx_0_1 00:20:40.932 valid_lft forever preferred_lft forever 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo 
mlx_0_0 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:40.932 192.168.100.9' 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:40.932 192.168.100.9' 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:40.932 192.168.100.9' 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 
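
With both interface addresses known, nvmftestinit folds them into RDMA_IP_LIST and splits off the first and second target IPs (nvmf/common.sh@484-491); a sketch of that step using this run's values:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)    # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2)  # 192.168.100.9
[ -n "$NVMF_FIRST_TARGET_IP" ]   # the trace aborts here if no RDMA IP was found
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
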
00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3859774 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3859774 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3859774 ']' 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:40.932 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.191 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.191 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:41.191 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:41.191 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:41.191 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:41.191 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:41.191 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:41.191 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:41.191 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:41.191 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=71814c73170c5bb5b166e67c2d023970 00:20:41.191 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t 
spdk.key-null.XXX 00:20:41.191 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.FS9 00:20:41.191 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 71814c73170c5bb5b166e67c2d023970 0 00:20:41.191 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 71814c73170c5bb5b166e67c2d023970 0 00:20:41.191 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:41.191 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:41.191 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=71814c73170c5bb5b166e67c2d023970 00:20:41.191 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.FS9 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.FS9 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.FS9 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=16507ddb9a48cb58cdecdffd3fc45570b346ff7e0ebf8a6c1878eb91c0b13de9 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Zqh 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 16507ddb9a48cb58cdecdffd3fc45570b346ff7e0ebf8a6c1878eb91c0b13de9 3 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 16507ddb9a48cb58cdecdffd3fc45570b346ff7e0ebf8a6c1878eb91c0b13de9 3 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=16507ddb9a48cb58cdecdffd3fc45570b346ff7e0ebf8a6c1878eb91c0b13de9 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Zqh 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Zqh 00:20:41.192 10:49:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Zqh 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4fc4e429e252bc0de96496dbb0a7f5e0d887819ffea38d37 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ouR 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4fc4e429e252bc0de96496dbb0a7f5e0d887819ffea38d37 0 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4fc4e429e252bc0de96496dbb0a7f5e0d887819ffea38d37 0 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4fc4e429e252bc0de96496dbb0a7f5e0d887819ffea38d37 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ouR 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ouR 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ouR 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=31e0f028bca451bf8a9de0068e92a4daf58f349b894d93e8 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4zG 00:20:41.192 
10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 31e0f028bca451bf8a9de0068e92a4daf58f349b894d93e8 2 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 31e0f028bca451bf8a9de0068e92a4daf58f349b894d93e8 2 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=31e0f028bca451bf8a9de0068e92a4daf58f349b894d93e8 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4zG 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4zG 00:20:41.192 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.4zG 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0350d48da6e27a2098a957e748409b48 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Gjg 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0350d48da6e27a2098a957e748409b48 1 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0350d48da6e27a2098a957e748409b48 1 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0350d48da6e27a2098a957e748409b48 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Gjg 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Gjg 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Gjg 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:41.451 10:49:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7cb3fb0e766b498f885d636bbfda658f 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.SxQ 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7cb3fb0e766b498f885d636bbfda658f 1 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7cb3fb0e766b498f885d636bbfda658f 1 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7cb3fb0e766b498f885d636bbfda658f 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.SxQ 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.SxQ 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.SxQ 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:41.451 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:41.452 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:20:41.452 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:41.452 10:49:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=891175c4c4693821183b2b66299dcc0d2d09a47f5c567880 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.gTN 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 891175c4c4693821183b2b66299dcc0d2d09a47f5c567880 2 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 
891175c4c4693821183b2b66299dcc0d2d09a47f5c567880 2 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=891175c4c4693821183b2b66299dcc0d2d09a47f5c567880 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.gTN 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.gTN 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.gTN 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=03cdb5cfe7e7cad09ae112aaa8275e0c 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3vh 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 03cdb5cfe7e7cad09ae112aaa8275e0c 0 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 03cdb5cfe7e7cad09ae112aaa8275e0c 0 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=03cdb5cfe7e7cad09ae112aaa8275e0c 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3vh 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3vh 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.3vh 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:41.452 
10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:20:41.452 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ce2b244186dd8e68480843b665d506531d821e65c797cb87698f9e4064142f89 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.y3l 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ce2b244186dd8e68480843b665d506531d821e65c797cb87698f9e4064142f89 3 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ce2b244186dd8e68480843b665d506531d821e65c797cb87698f9e4064142f89 3 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ce2b244186dd8e68480843b665d506531d821e65c797cb87698f9e4064142f89 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.y3l 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.y3l 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.y3l 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3859774 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3859774 ']' 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
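
Every keys[i]/ckeys[i] pair generated above comes out of the same gen_dhchap_key helper: random hex from /dev/urandom, a mktemp'd spdk.key-<digest>.XXX file, and an inline python step that wraps the material in the DHHC-1 key format before the file is locked down to 0600. A condensed sketch reconstructed from the trace; the CRC32-plus-base64 framing inside the python step is the standard DH-HMAC-CHAP key encoding and is an assumption here (the xtrace shows only `python -`, not the script body):

gen_dhchap_key() {
    local digest=$1 len=$2                              # e.g. gen_dhchap_key sha256 32
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # len hex digits of key material
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # DHHC-1:<2-digit digest id>:base64(key bytes + little-endian CRC32):
    python3 -c '
import base64, struct, sys, zlib
key = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(key))
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
' "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

The framing checks out against the trace itself: the 48 hex digits of keys[1] (4fc4e429...ea38d37) plus a 4-byte checksum make 52 bytes, i.e. exactly the 72 base64 characters of the DHHC-1:00:NGZjNGU0...mkr3TQ==: value echoed further down.
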
00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FS9 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Zqh ]] 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Zqh 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ouR 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.711 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.4zG ]] 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4zG 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Gjg 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.SxQ ]] 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SxQ 00:20:41.970 10:49:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.gTN 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.3vh ]] 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.3vh 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.y3l 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:41.970 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.971 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.971 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:41.971 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:41.971 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:41.971 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:41.971 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:41.971 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:20:41.971 10:49:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:20:41.971 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:41.971 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:41.971 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:41.971 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:41.971 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:20:41.971 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:20:41.971 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:41.971 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:41.971 10:49:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:20:45.251 Waiting for block devices as requested 00:20:45.252 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:20:45.252 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:20:45.252 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:20:45.252 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:20:45.509 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:20:45.509 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:20:45.509 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:20:45.767 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:20:45.767 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:20:45.767 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:20:46.024 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:20:46.024 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:20:46.024 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:20:46.283 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:20:46.283 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:20:46.283 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:20:46.541 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:20:47.112 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:47.112 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:47.112 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:47.112 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:47.112 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:47.112 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:47.112 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:47.112 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:47.112 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:20:47.112 No valid GPT data, bailing 00:20:47.112 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:47.112 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:47.112 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:47.112 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:47.112 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:20:47.112 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:47.112 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:20:47.371 00:20:47.371 Discovery Log Number of Records 2, Generation counter 2 00:20:47.371 =====Discovery Log Entry 0====== 00:20:47.371 trtype: rdma 00:20:47.371 adrfam: ipv4 00:20:47.371 subtype: current discovery subsystem 00:20:47.371 treq: not specified, sq flow control disable supported 00:20:47.371 portid: 1 00:20:47.371 trsvcid: 4420 00:20:47.371 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:47.371 traddr: 192.168.100.8 00:20:47.371 eflags: none 00:20:47.371 rdma_prtype: not specified 00:20:47.371 rdma_qptype: connected 00:20:47.371 rdma_cms: rdma-cm 00:20:47.371 rdma_pkey: 0x0000 00:20:47.371 =====Discovery Log Entry 1====== 00:20:47.371 trtype: rdma 00:20:47.371 adrfam: ipv4 00:20:47.371 subtype: nvme subsystem 00:20:47.371 treq: not specified, sq flow control disable supported 00:20:47.371 portid: 1 00:20:47.371 trsvcid: 4420 00:20:47.371 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:47.371 traddr: 192.168.100.8 00:20:47.371 eflags: none 00:20:47.371 rdma_prtype: not specified 00:20:47.371 rdma_qptype: connected 00:20:47.371 rdma_cms: rdma-cm 00:20:47.371 rdma_pkey: 0x0000 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: ]] 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.371 10:49:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.630 nvme0n1 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: ]] 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:47.630 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:47.889 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.889 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.889 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.889 nvme0n1 00:20:47.889 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.889 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.889 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.889 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.889 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.889 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.889 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.889 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.889 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.889 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: ]] 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
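
From this point the script cycles every digest/dhgroup/keyid combination through the same two-sided setup: nvmet_auth_set_key pushes the hash, DH group, host key, and (when present) controller key into the kernel target's configfs host directory, and connect_authenticate points the SPDK host at the subsystem with the matching key files. A sketch of one iteration; the rpc.py invocations mirror the rpc_cmd lines in the trace (rpc_cmd wraps SPDK's scripts/rpc.py), while the configfs attribute names come from the kernel nvmet auth interface and are assumed rather than visible in this xtrace, since redirections are not traced:

# target side: where nvmet_auth_set_key's four echo lines land (assumed attrs)
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'   > "$host/dhchap_hash"
echo ffdhe2048        > "$host/dhchap_dhgroup"
echo "DHHC-1:00:...:" > "$host/dhchap_key"       # keys[keyid], full value in trace
echo "DHHC-1:02:...:" > "$host/dhchap_ctrl_key"  # ckeys[keyid], when non-empty

# host side: one connect_authenticate pass over the same keyid
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 \
        --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0
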
00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.147 nvme0n1 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.147 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: ]] 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.406 10:49:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.665 nvme0n1 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: ]] 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.665 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.924 nvme0n1 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.924 10:49:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
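The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion traced at host/auth.sh@58 is what makes bidirectional authentication optional: key id 4 above has an empty controller key (ckey=), so the array expands to nothing and bdev_nvme_attach_controller is invoked with --dhchap-key key4 alone. A minimal standalone sketch of the idiom (array contents are placeholders, not the test's real secrets):

    #!/usr/bin/env bash
    # Sketch of the optional controller-key idiom from host/auth.sh@58.
    ckeys=("ctrl-secret-0" "ctrl-secret-1" "")   # last entry empty, like keyid 4
    for keyid in "${!ckeys[@]}"; do
        # :+ substitutes the alternate words only when ckeys[keyid] is non-empty,
        # so ckey is either (--dhchap-ctrlr-key ckeyN) or a zero-element array.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
    done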
00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.924 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.183 nvme0n1 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:20:49.183 
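At this point the sweep moves from ffdhe2048 to ffdhe3072: host/auth.sh@101-103 show the nested loops that drive everything in this stretch of the log. Roughly (a sketch with stub helpers; the dhgroups listed are only the ones visible in this excerpt, and the real arrays hold DHHC-1 secrets):

    #!/usr/bin/env bash
    # Shape of the dhgroup/keyid sweep traced at host/auth.sh@101-103.
    digest=sha256
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)      # groups seen in this section
    keys=(k0 k1 k2 k3 k4)                         # placeholder secrets, key ids 0..4
    nvmet_auth_set_key()   { echo "target: $*"; } # stub; real helper programs the target
    connect_authenticate() { echo "host:   $*"; } # stub; real helper attaches and verifies
    for dhgroup in "${dhgroups[@]}"; do           # host/auth.sh@101
        for keyid in "${!keys[@]}"; do            # host/auth.sh@102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done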
10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: ]] 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.183 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.442 nvme0n1 00:20:49.442 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.442 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.442 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.442 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.442 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.442 10:49:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: ]] 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=1 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.442 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.700 nvme0n1 00:20:49.700 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.700 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.700 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.700 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.700 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.700 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.700 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.700 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.700 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.700 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.958 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.958 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.958 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:49.958 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.958 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.958 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: ]] 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:49.959 10:49:17 
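The nvmf/common.sh@769-783 entries around here are get_main_ns_ip resolving which address the initiator should dial. Reconstructed from the trace (a sketch, not the verbatim helper; TEST_TRANSPORT is an assumed name for the variable that the traced [[ -z rdma ]] check expands):

    # get_main_ns_ip, as reconstructed from the nvmf/common.sh@769-783 trace.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # expands to 192.168.100.8 in this run
            [tcp]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1                 # traced as [[ -z rdma ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                 # holds a variable *name*
        [[ -z ${!ip} ]] && return 1                          # traced as [[ -z 192.168.100.8 ]]
        echo "${!ip}"                                        # indirect expansion
    }
    # TEST_TRANSPORT=rdma NVMF_FIRST_TARGET_IP=192.168.100.8 get_main_ns_ip  -> 192.168.100.8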
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.959 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.218 nvme0n1 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: ]] 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.218 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.477 nvme0n1 00:20:50.477 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.477 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.477 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.477 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.477 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.477 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.477 10:49:17 
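Each iteration ends with the same verification: bdev_nvme_get_controllers piped through jq to pull the controller name, the name compared against nvme0, then a detach. The \n\v\m\e\0 in the comparison that follows is not corruption; xtrace backslash-escapes a quoted string on the pattern side of [[ == ]] to show it matches literally. The equivalent, unwrapped from the rpc_cmd helper (scripts/rpc.py is SPDK's RPC client; the socket path is whatever the test exported):

    # Per-iteration verification, mirroring host/auth.sh@64-65.
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                        # xtrace renders the RHS as \n\v\m\e\0
    scripts/rpc.py bdev_nvme_detach_controller nvme0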
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.477 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.477 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.477 10:49:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.477 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.736 nvme0n1 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.736 
10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: ]] 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:50.736 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:50.737 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:50.737 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:50.737 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.737 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.737 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.995 nvme0n1 00:20:50.995 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.253 
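The host side of each iteration is the two RPCs traced at host/auth.sh@60-61: bdev_nvme_set_options pins the digests and DH groups the initiator may negotiate, and bdev_nvme_attach_controller dials the target with named DH-HMAC-CHAP keys (key0/ckey0 here are key names registered earlier in the test, outside this excerpt). Unwrapped from rpc_cmd, with the flags taken verbatim from the trace:

    # Host-side RPC pair for one iteration (ffdhe4096, key id 0).
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0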
10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: ]] 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:51.253 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:51.254 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:51.254 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.254 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.254 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:51.254 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:51.254 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:51.254 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:51.254 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:51.254 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.254 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.254 10:49:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.512 nvme0n1 00:20:51.512 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.512 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.512 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.512 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.512 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.512 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.512 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.512 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.512 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.512 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.512 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.512 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.512 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:51.512 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:20:51.512 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:51.512 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:51.512 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: ]] 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:51.513 
10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.513 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.771 nvme0n1 00:20:51.772 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.772 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.772 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.772 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.772 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.772 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: ]] 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.030 10:49:19 
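The secrets echoed throughout this section follow the NVMe-oF DH-HMAC-CHAP representation DHHC-1:NN:base64:, where NN names the hash the secret was transformed with (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the secret followed by a 4-byte CRC32. A quick size check on one key from the trace bears that out:

    # Decode one DHHC-1 secret from the log and check the payload length.
    key='DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH:'
    b64=$(cut -d: -f3 <<< "$key")
    bytes=$(printf '%s' "$b64" | base64 -d | wc -c)
    echo "payload: $bytes bytes ($((bytes - 4)) secret + 4 CRC)"  # 36 bytes: a 32-byte SHA-256 secret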
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:52.030 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:52.031 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:52.031 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.031 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.031 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:52.031 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:52.031 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:52.031 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:52.031 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:52.031 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:52.031 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.031 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.289 nvme0n1 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.289 
10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.289 10:49:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.547 nvme0n1 00:20:52.547 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.547 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.547 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.547 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.547 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: ]] 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.805 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.063 nvme0n1 00:20:53.063 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.064 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.064 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.064 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:53.064 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.064 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: ]] 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.322 10:49:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.581 nvme0n1 00:20:53.581 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.581 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.581 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.581 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.581 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.581 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: ]] 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.839 10:49:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.839 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.098 nvme0n1 00:20:54.098 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.098 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.098 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.098 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.098 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.098 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: ]] 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:54.356 10:49:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.356 10:49:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.613 nvme0n1 00:20:54.613 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.613 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.613 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.613 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.613 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.613 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:54.871 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.872 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:54.872 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.872 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.872 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.872 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.872 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:54.872 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:54.872 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:54.872 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.872 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.872 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:54.872 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:54.872 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:54.872 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:54.872 10:49:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:54.872 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:54.872 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.872 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.130 nvme0n1 00:20:55.130 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.130 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.130 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.130 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.130 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.130 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: ]] 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 
00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.388 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.389 10:49:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.955 nvme0n1 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: ]] 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.955 10:49:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.522 nvme0n1 00:20:56.522 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.522 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.522 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.522 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.522 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.522 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:20:56.780 10:49:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: ]] 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.780 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.347 nvme0n1 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.347 
10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: ]] 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:57.347 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.348 10:49:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.915 nvme0n1 00:20:57.915 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.915 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.915 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.915 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.915 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.915 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:58.174 10:49:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.174 10:49:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.741 nvme0n1 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: ]] 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
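Each iteration of the trace above is one DH-HMAC-CHAP pass for a single (digest, dhgroup, keyid) tuple: host/auth.sh@103 programs the key pair into the target with nvmet_auth_set_key, then @104 (connect_authenticate) pins the host to that digest/dhgroup via bdev_nvme_set_options, attaches with the matching key, and checks that a controller named nvme0 appears before detaching it. Below is a minimal stand-alone sketch of the same RPC sequence, calling SPDK's scripts/rpc.py directly instead of the harness's rpc_cmd wrapper; it assumes the target setup and the key0/ckey0 key names already exist from earlier in the run, with all flag values taken verbatim from the pass above.

# Pin the host to one digest/dhgroup pair (values from the sha384/ffdhe2048 pass).
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
# Attach with DH-HMAC-CHAP; the ctrlr key enables bidirectional authentication.
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Authentication succeeded if the controller shows up; then clean up for the next pass.
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0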
00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.741 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.000 nvme0n1 00:20:59.000 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.000 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.000 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.000 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.000 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.000 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.000 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.000 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.000 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:59.000 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.000 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.000 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.000 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:59.000 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.000 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.000 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: ]] 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.001 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.259 nvme0n1 00:20:59.259 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.259 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.259 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.259 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.259 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.259 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.259 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.259 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.259 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.259 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.259 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.259 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.259 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:59.259 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.259 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.259 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:59.259 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:59.260 10:49:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: ]] 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:59.260 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:59.518 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.518 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.518 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:59.518 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:59.518 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:59.518 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:59.518 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:59.518 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.518 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.518 10:49:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.518 nvme0n1 00:20:59.518 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.518 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.518 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:59.518 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.518 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.518 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.518 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.518 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.518 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.518 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.776 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.776 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.776 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:59.776 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.776 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.776 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:59.776 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:59.776 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:20:59.776 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:20:59.776 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.776 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:59.776 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:20:59.776 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: ]] 00:20:59.776 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.777 10:49:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.777 nvme0n1 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.777 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.034 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:21:00.293 nvme0n1 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: ]] 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:00.293 
10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.293 10:49:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.551 nvme0n1 00:21:00.551 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.551 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.551 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.551 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.551 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: ]] 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.552 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.810 nvme0n1 00:21:00.810 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.810 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.810 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.810 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.810 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.810 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: ]] 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.811 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.069 nvme0n1 00:21:01.069 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.069 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.069 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.069 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.069 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.069 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.069 10:49:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.069 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.069 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.069 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.069 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.069 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.069 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: ]] 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.070 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.329 nvme0n1 00:21:01.329 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.329 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.329 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.329 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.329 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.329 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.329 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.329 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.329 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.329 10:49:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:01.588 10:49:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.588 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.847 nvme0n1 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
jq -r '.[].name' 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: ]] 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:01.847 10:49:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.847 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.106 nvme0n1 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.106 10:49:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: ]] 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:02.106 10:49:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.106 10:49:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.364 nvme0n1 00:21:02.364 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.364 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.365 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.365 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.365 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: ]] 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.623 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.882 nvme0n1 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: ]] 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.882 10:49:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.882 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.140 nvme0n1 00:21:03.141 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.141 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.141 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.141 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.141 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.141 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.399 10:49:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.658 nvme0n1 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.658 10:49:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: ]] 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.658 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.225 nvme0n1 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
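Every iteration in this stretch of the trace has the same shape: host/auth.sh@101-102 loop over the dhgroups and key IDs, @103 provisions the target side through nvmet_auth_set_key, and @104 calls connect_authenticate, which issues the host-side RPCs logged at @55-@65. Condensed into a sketch for orientation (not the verbatim script: the keys/ckeys arrays, the rpc_cmd wrapper, and nvmet_auth_set_key are defined earlier in auth.sh and elided here; the IP and NQNs are the literal values this run resolved):

    # One (digest, dhgroup, keyid) authentication round-trip, as traced in this log.
    for dhgroup in "${dhgroups[@]}"; do    # ffdhe3072 ... ffdhe8192 in this excerpt
      for keyid in "${!keys[@]}"; do       # key IDs 0..4
        # Target: install the DH-HMAC-CHAP key (plus controller key, when one exists).
        nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
        # Host: restrict the initiator to the digest/dhgroup under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        # Connect; keyid 4 carries no controller key, so that pass authenticates one-way only.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
          -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
          -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" \
          ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        # Success check: the controller surfaces as nvme0 (namespace nvme0n1), then tear down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done

The DHHC-1 secrets echoed at @45-@51 use the NVMe in-band authentication secret representation DHHC-1:<t>:<base64 secret>:, where the two-digit <t> field names the transformation hash applied to the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512).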
00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: ]] 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.225 10:49:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.792 nvme0n1 00:21:04.792 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.792 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.792 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: ]] 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:04.793 10:49:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.793 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.051 nvme0n1 00:21:05.051 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.051 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.051 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.051 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.051 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.051 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.051 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.051 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.051 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.051 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
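The block of nvmf/common.sh@769-783 lines recurring before every attach is get_main_ns_ip resolving which address the host should dial; because this is set -x output, conditions appear with their values already expanded, hence the odd-looking [[ -z rdma ]] and [[ -z NVMF_FIRST_TARGET_IP ]]. Reconstructed from those expansions as a sketch — the transport variable name and the handling of the skipped branch are assumptions, since only expanded values are visible in the trace:

    # Map the transport under test to the environment variable naming the target IP,
    # then dereference it. In this rdma run it prints 192.168.100.8.
    get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP                 # common.sh@772
      ip_candidates["tcp"]=NVMF_INITIATOR_IP                     # common.sh@773
      [[ -z $TEST_TRANSPORT ]] && return 1                       # trace: [[ -z rdma ]]
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1     # trace: [[ -z NVMF_FIRST_TARGET_IP ]]
      ip=${ip_candidates[$TEST_TRANSPORT]}                       # common.sh@776
      # @778 dereferences the candidate indirectly; trace shows [[ -z 192.168.100.8 ]]
      if [[ -z ${!ip} ]]; then
        return 1    # @779-@782 never fire in this run; assumed to handle the empty case
      fi
      echo "${!ip}"                                              # common.sh@783
    }

The jump in the trace from @778 straight to @783 on every iteration confirms the preferred variable is always populated here, so the same 192.168.100.8 target address is reused for each attach.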
00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: ]] 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.309 10:49:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 nvme0n1 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:05.568 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:05.827 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:05.827 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.827 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.085 nvme0n1 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: ]] 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:06.085 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.086 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.086 10:49:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.023 nvme0n1 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: ]] 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:07.023 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:07.024 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.024 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.024 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:07.024 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:07.024 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:07.024 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:07.024 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:07.024 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.024 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.024 10:49:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:07.591 nvme0n1 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: ]] 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.591 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.158 nvme0n1 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: ]] 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:08.158 
10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.158 10:49:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.724 nvme0n1 00:21:08.724 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.983 10:49:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.549 nvme0n1 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:21:09.549 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: ]] 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:09.550 10:49:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.550 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.808 nvme0n1 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:09.808 10:49:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: ]] 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.808 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.065 nvme0n1 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.065 10:49:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: ]] 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
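Condensed, each connect_authenticate pass traced above reduces to the following RPC sequence (a sketch, not captured output, assuming the test's rpc_cmd wrapper around SPDK's RPC client; shown for the sha512/ffdhe2048, keyid=2 case being exercised here):

  # negotiate DH-HMAC-CHAP with the digest/dhgroup under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # attach with host key 2; ckey2 is set, so controller (bidirectional) auth is requested too
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # verify the controller authenticated and came up, then tear it down for the next keyid
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0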
00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.065 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.323 nvme0n1 00:21:10.323 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.323 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.323 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.323 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.323 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.323 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.323 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.323 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.323 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.323 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.581 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.581 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.581 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:21:10.581 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.581 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:10.581 10:49:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:10.581 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:10.581 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:21:10.581 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:21:10.581 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:10.581 10:49:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: ]] 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.581 nvme0n1 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.581 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:10.840 10:49:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.840 nvme0n1 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.840 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
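Note that the keyid=4 pass just above attaches with --dhchap-key key4 only: its controller key is empty ([[ -z '' ]] at host/auth.sh@51), so the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 drops --dhchap-ctrlr-key and authentication stays unidirectional. A minimal sketch of that expansion (key values and transport arguments elided):

  ckeys=( "DHHC-1:03:..." "DHHC-1:02:..." "DHHC-1:01:..." "DHHC-1:00:..." "" )  # keyid 4: no ctrlr key
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})  # expands to zero words when ckeys[keyid] is empty
  rpc_cmd bdev_nvme_attach_controller -b nvme0 ... --dhchap-key "key${keyid}" "${ckey[@]}"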
00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: ]] 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.098 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.357 nvme0n1 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: ]] 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:11.357 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:11.358 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.358 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.358 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:11.358 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:11.358 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:11.358 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:11.358 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:11.358 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.358 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.358 10:49:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.616 nvme0n1 00:21:11.616 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.616 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.616 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.616 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.616 10:49:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.616 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.616 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.616 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.616 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.616 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.616 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.616 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.616 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:21:11.616 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.616 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: ]] 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.617 10:49:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.617 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.879 nvme0n1 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 
00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: ]] 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:11.879 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.879 10:49:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.240 nvme0n1 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.240 10:49:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.499 nvme0n1 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: ]] 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
192.168.100.8 ]] 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:12.499 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.757 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.757 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.016 nvme0n1 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: ]] 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.016 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.275 nvme0n1 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.275 10:49:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: ]] 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.275 10:49:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.275 10:49:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.842 nvme0n1 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: ]] 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.842 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.101 nvme0n1 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.101 
10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.101 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.360 nvme0n1 00:21:14.360 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.360 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.360 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.360 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.360 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.360 10:49:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.360 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.360 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.360 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.360 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:14.618 10:49:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: ]] 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.618 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.876 nvme0n1 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: ]] 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:14.876 10:49:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.876 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.134 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.134 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.134 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.134 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.134 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.134 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.134 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.134 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:15.134 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:15.134 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:15.134 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:15.134 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:15.134 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.134 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.134 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.392 nvme0n1 00:21:15.393 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.393 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.393 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.393 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.393 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.393 10:49:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: ]] 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 
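
The get_main_ns_ip fragment that repeats before every attach picks the address to dial: a transport-keyed associative array maps rdma to the NVMF_FIRST_TARGET_IP variable name, which is then dereferenced to 192.168.100.8. Reconstructed from the trace (the TEST_TRANSPORT variable name is an assumption; only the literal rdma is visible in this log):

  # Reconstruction of the nvmf/common.sh IP-selection helper seen in the trace.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1                   # "rdma" in this run
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}                   # -> NVMF_FIRST_TARGET_IP
      [[ -z ${!ip} ]] && return 1                            # indirection -> 192.168.100.8
      echo "${!ip}"
  }
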
00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.393 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.959 nvme0n1 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: ]] 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.959 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.525 nvme0n1 00:21:16.525 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.525 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:16.525 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.525 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.525 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:21:16.525 10:49:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:16.525 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:16.526 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:21:16.526 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.526 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.526 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:16.526 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:16.526 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:16.526 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:16.526 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:16.526 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:16.526 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.526 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.092 nvme0n1 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 
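
All of the secrets echoed above use the NVMe in-band authentication representation DHHC-1:<subtype>:<base64>: . As I read the format, the two-digit subtype records how the secret was transformed (00 = cleartext; 01/02/03 = SHA-256/384/512, which matches the 32-, 48- and 64-byte payloads of keys 2, 3 and 4 in this log), and the base64 payload carries the secret followed by a 4-byte CRC-32 trailer. A quick plausibility check on key0 from the trace, using nothing but coreutils:

  # Decode a DHHC-1 secret and confirm its length: 36 bytes here,
  # i.e. a 32-byte secret plus the 4-byte CRC-32 trailer.
  key='DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I:'
  payload=${key#DHHC-1:*:}   # strip the "DHHC-1:00:" prefix (shortest match)
  payload=${payload%:}       # strip the trailing ':'
  echo -n "$payload" | base64 -d | wc -c   # -> 36
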
00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE4MTRjNzMxNzBjNWJiNWIxNjZlNjdjMmQwMjM5NzDxif1I: 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: ]] 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTY1MDdkZGI5YTQ4Y2I1OGNkZWNkZmZkM2ZjNDU1NzBiMzQ2ZmY3ZTBlYmY4YTZjMTg3OGViOTFjMGIxM2RlOVVyf8A=: 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.092 10:49:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.658 nvme0n1 
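
With ffdhe8192 the harness has moved on to the next dhgroup: the for dhgroup in "${dhgroups[@]}" / for keyid in "${!keys[@]}" frames at host/auth.sh@101-102 imply a dhgroup-by-keyid sweep for each digest under test. Schematically (only ffdhe2048/6144/8192 and keyids 0-4 are visible in this log, so the full dhgroup list and the placeholder key values are assumptions):

  # Loop structure implied by the host/auth.sh@101-103 trace lines.
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  keys=(key0 key1 key2 key3 key4)      # placeholders for the DHHC-1 strings above
  ckeys=(ckey0 ckey1 ckey2 ckey3 '')   # ckeys[4] is empty (see the note further down)
  for dhgroup in "${dhgroups[@]}"; do             # host/auth.sh@101
      for keyid in "${!keys[@]}"; do              # host/auth.sh@102
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # target-side setup, as traced
          connect_authenticate sha512 "$dhgroup" "$keyid"  # host attach/verify/detach
      done
  done
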
00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: ]] 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.658 10:49:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.658 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.225 nvme0n1 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: ]] 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.225 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.483 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.483 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.483 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:18.483 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:18.483 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:18.483 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.483 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.483 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:18.483 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:18.483 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:18.483 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 
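
One detail worth flagging in the connect_authenticate frames: host/auth.sh@58 builds the controller-key arguments as ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}). The ${var:+word} expansion yields word only when var is set and non-empty, so for keyid 4, whose controller key is empty (the [[ -z '' ]] checks later in this log), the --dhchap-ctrlr-key flag is dropped entirely and authentication is unidirectional. A minimal illustration, with the expansion line taken verbatim from the trace:

  # ${var:+word}: emit the extra arguments only when a controller key exists
  # for this keyid; otherwise ckey=() and "${ckey[@]}" expands to nothing.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a "$ip" -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"
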
00:21:18.483 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:18.483 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.483 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.483 10:49:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.050 nvme0n1 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODkxMTc1YzRjNDY5MzgyMTE4M2IyYjY2Mjk5ZGNjMGQyZDA5YTQ3ZjVjNTY3ODgwO5AESQ==: 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: ]] 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDNjZGI1Y2ZlN2U3Y2FkMDlhZTExMmFhYTgyNzVlMGPtQOsi: 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:21:19.050 10:49:46 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.050 10:49:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.616 nvme0n1 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
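
From here the section finishes the sweep (keyid 4, host key only) and then flips to the failure paths: the target is re-keyed to sha256/ffdhe2048 while the host retries with missing or mismatched keys, so the bdev_nvme_attach_controller and bdev_nvme_set_keys requests below are expected to be rejected (code -5 Input/output error, code -13 Permission denied). Each of those calls is wrapped in the NOT helper, which inverts the exit status. A simplified sketch of that helper; the real autotest_common.sh version also validates its argument with type -t, as the @638/@642 frames show:

  # Expected-failure wrapper, simplified: succeed only if "$@" fails cleanly.
  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return 1   # killed by a signal: not a clean failure
      (( es != 0 ))                # NOT succeeds exactly when the command failed
  }
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a "$ip" -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0   # no key -> must fail
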
00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2UyYjI0NDE4NmRkOGU2ODQ4MDg0M2I2NjVkNTA2NTMxZDgyMWU2NWM3OTdjYjg3Njk4ZjllNDA2NDE0MmY4OZIguu4=: 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:21:19.616 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.874 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.440 nvme0n1 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: ]] 00:21:20.440 
10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.440 10:49:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.440 request: 00:21:20.440 { 00:21:20.440 "name": "nvme0", 00:21:20.440 "trtype": "rdma", 00:21:20.440 "traddr": "192.168.100.8", 00:21:20.440 "adrfam": "ipv4", 00:21:20.440 "trsvcid": "4420", 00:21:20.440 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:21:20.440 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:20.440 "prchk_reftag": false, 00:21:20.440 "prchk_guard": false, 00:21:20.440 "hdgst": false, 00:21:20.440 "ddgst": false, 00:21:20.440 "allow_unrecognized_csi": false, 00:21:20.440 "method": "bdev_nvme_attach_controller", 00:21:20.440 "req_id": 1 00:21:20.440 } 00:21:20.440 Got JSON-RPC error response 00:21:20.440 response: 00:21:20.440 { 00:21:20.440 "code": -5, 00:21:20.440 "message": "Input/output error" 00:21:20.440 } 00:21:20.440 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:20.440 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:20.440 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:20.440 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:20.440 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:20.440 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:21:20.440 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.440 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:21:20.440 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.440 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.701 request: 00:21:20.701 { 00:21:20.701 "name": "nvme0", 00:21:20.701 "trtype": "rdma", 00:21:20.701 "traddr": "192.168.100.8", 00:21:20.701 "adrfam": "ipv4", 00:21:20.701 "trsvcid": "4420", 00:21:20.701 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:20.701 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:20.701 "prchk_reftag": false, 00:21:20.701 "prchk_guard": false, 00:21:20.701 "hdgst": false, 00:21:20.701 "ddgst": false, 00:21:20.701 "dhchap_key": "key2", 00:21:20.701 "allow_unrecognized_csi": false, 00:21:20.701 "method": "bdev_nvme_attach_controller", 00:21:20.701 "req_id": 1 00:21:20.701 } 00:21:20.701 Got JSON-RPC error response 00:21:20.701 response: 00:21:20.701 { 00:21:20.701 "code": -5, 00:21:20.701 "message": "Input/output error" 00:21:20.701 } 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma 
]] 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.701 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.959 request: 00:21:20.959 { 00:21:20.959 "name": "nvme0", 00:21:20.959 "trtype": "rdma", 00:21:20.959 "traddr": "192.168.100.8", 00:21:20.959 "adrfam": "ipv4", 00:21:20.959 "trsvcid": "4420", 00:21:20.959 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:20.959 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:20.959 "prchk_reftag": false, 00:21:20.959 "prchk_guard": false, 00:21:20.959 "hdgst": false, 00:21:20.959 "ddgst": false, 00:21:20.959 "dhchap_key": "key1", 00:21:20.959 "dhchap_ctrlr_key": "ckey2", 00:21:20.959 "allow_unrecognized_csi": false, 00:21:20.959 "method": "bdev_nvme_attach_controller", 00:21:20.959 "req_id": 1 00:21:20.959 } 00:21:20.959 Got JSON-RPC error response 00:21:20.959 response: 00:21:20.959 { 00:21:20.959 "code": -5, 00:21:20.959 "message": "Input/output error" 00:21:20.959 } 00:21:20.959 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:20.959 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:20.959 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:20.959 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:20.959 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:20.959 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:21:20.959 10:49:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:20.959 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.960 nvme0n1 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: ]] 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.960 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.218 
10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.218 request: 00:21:21.218 { 00:21:21.218 "name": "nvme0", 00:21:21.218 "dhchap_key": "key1", 00:21:21.218 "dhchap_ctrlr_key": "ckey2", 00:21:21.218 "method": "bdev_nvme_set_keys", 00:21:21.218 "req_id": 1 00:21:21.218 } 00:21:21.218 Got JSON-RPC error response 00:21:21.218 response: 00:21:21.218 { 00:21:21.218 "code": -13, 00:21:21.218 "message": "Permission denied" 00:21:21.218 } 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:21.218 10:49:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:22.590 10:49:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.590 10:49:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:22.590 10:49:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.590 10:49:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.590 10:49:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.590 10:49:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:22.590 10:49:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGZjNGU0MjllMjUyYmMwZGU5NjQ5NmRiYjBhN2Y1ZTBkODg3ODE5ZmZlYTM4ZDM3mkr3TQ==: 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: ]] 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzFlMGYwMjhiY2E0NTFiZjhhOWRlMDA2OGU5MmE0ZGFmNThmMzQ5Yjg5NGQ5M2U4WOTYdA==: 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.525 10:49:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.525 nvme0n1 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDM1MGQ0OGRhNmUyN2EyMDk4YTk1N2U3NDg0MDliNDib26FH: 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: ]] 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2NiM2ZiMGU3NjZiNDk4Zjg4NWQ2MzZiYmZkYTY1OGYPrNhV: 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:23.525 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.526 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.783 request: 00:21:23.784 { 00:21:23.784 "name": "nvme0", 00:21:23.784 "dhchap_key": "key2", 00:21:23.784 "dhchap_ctrlr_key": "ckey1", 00:21:23.784 "method": "bdev_nvme_set_keys", 00:21:23.784 "req_id": 1 00:21:23.784 } 00:21:23.784 Got JSON-RPC error response 00:21:23.784 response: 00:21:23.784 { 00:21:23.784 "code": -13, 00:21:23.784 "message": "Permission denied" 00:21:23.784 } 00:21:23.784 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:23.784 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:23.784 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:23.784 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:23.784 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:23.784 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.784 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:23.784 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.784 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.784 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.784 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:23.784 10:49:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:24.717 10:49:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.717 10:49:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:24.717 10:49:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.717 10:49:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.717 10:49:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.717 10:49:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:24.717 10:49:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:25.650 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.650 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:25.650 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.650 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.650 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:21:25.909 
10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:25.909 rmmod nvme_rdma 00:21:25.909 rmmod nvme_fabrics 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3859774 ']' 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3859774 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 3859774 ']' 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 3859774 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3859774 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3859774' 00:21:25.909 killing process with pid 3859774 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 3859774 00:21:25.909 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 3859774 00:21:26.168 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:26.168 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:26.168 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:26.168 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:26.168 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:21:26.168 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:26.168 
10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:21:26.168 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:26.168 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:26.168 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:26.168 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:26.168 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:26.168 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:21:26.168 10:49:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:21:29.471 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:21:29.471 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:21:29.471 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:21:29.471 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:21:29.471 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:21:29.471 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:21:29.471 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:21:29.471 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:21:29.471 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:21:29.471 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:21:29.471 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:21:29.471 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:21:29.471 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:21:29.471 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:21:29.471 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:21:29.471 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:21:31.373 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:21:31.631 10:49:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.FS9 /tmp/spdk.key-null.ouR /tmp/spdk.key-sha256.Gjg /tmp/spdk.key-sha384.gTN /tmp/spdk.key-sha512.y3l /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:21:31.631 10:49:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:21:34.912 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:21:34.912 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:21:34.912 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:21:34.912 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:21:34.912 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:21:34.912 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:21:34.912 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:21:34.912 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:21:34.912 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:21:34.912 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:21:34.912 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:21:34.912 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:21:34.912 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:21:34.912 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:21:34.912 0000:80:04.1 (8086 2021): Already 
using the vfio-pci driver 00:21:34.912 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:21:34.912 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:21:34.912 00:21:34.912 real 1m0.938s 00:21:34.912 user 0m54.822s 00:21:34.912 sys 0m15.266s 00:21:34.912 10:50:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:34.912 10:50:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.912 ************************************ 00:21:34.912 END TEST nvmf_auth_host 00:21:34.912 ************************************ 00:21:34.912 10:50:02 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:21:34.912 10:50:02 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:21:34.912 10:50:02 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:21:34.912 10:50:02 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:21:34.912 10:50:02 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:21:34.912 10:50:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:34.912 10:50:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:34.912 10:50:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.170 ************************************ 00:21:35.170 START TEST nvmf_bdevperf 00:21:35.170 ************************************ 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:21:35.170 * Looking for test storage... 
00:21:35.170 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:35.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.170 --rc genhtml_branch_coverage=1 00:21:35.170 --rc genhtml_function_coverage=1 00:21:35.170 --rc genhtml_legend=1 00:21:35.170 --rc geninfo_all_blocks=1 00:21:35.170 --rc geninfo_unexecuted_blocks=1 00:21:35.170 00:21:35.170 ' 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:35.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.170 --rc genhtml_branch_coverage=1 00:21:35.170 --rc genhtml_function_coverage=1 00:21:35.170 --rc genhtml_legend=1 00:21:35.170 --rc geninfo_all_blocks=1 00:21:35.170 --rc geninfo_unexecuted_blocks=1 00:21:35.170 00:21:35.170 ' 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:35.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.170 --rc genhtml_branch_coverage=1 00:21:35.170 --rc genhtml_function_coverage=1 00:21:35.170 --rc genhtml_legend=1 00:21:35.170 --rc geninfo_all_blocks=1 00:21:35.170 --rc geninfo_unexecuted_blocks=1 00:21:35.170 00:21:35.170 ' 00:21:35.170 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:35.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.170 --rc genhtml_branch_coverage=1 00:21:35.171 --rc genhtml_function_coverage=1 00:21:35.171 --rc genhtml_legend=1 00:21:35.171 --rc geninfo_all_blocks=1 00:21:35.171 --rc geninfo_unexecuted_blocks=1 00:21:35.171 00:21:35.171 ' 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.171 10:50:02 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:35.171 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:21:35.171 10:50:02 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:35.171 10:50:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:43.281 10:50:09 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:43.281 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:43.282 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:43.282 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:43.282 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:43.282 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:43.282 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:43.282 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:43.282 altname enp217s0f0np0 00:21:43.282 altname ens818f0np0 00:21:43.282 inet 192.168.100.8/24 scope global mlx_0_0 00:21:43.282 valid_lft forever preferred_lft forever 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:43.282 10:50:09 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:43.282 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:43.282 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:43.282 altname enp217s0f1np1 00:21:43.282 altname ens818f1np1 00:21:43.282 inet 192.168.100.9/24 scope global mlx_0_1 00:21:43.282 valid_lft forever preferred_lft forever 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:43.282 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:21:43.283 10:50:09 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:43.283 192.168.100.9' 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:43.283 192.168.100.9' 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:43.283 192.168.100.9' 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3874588 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@510 -- # waitforlisten 3874588 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3874588 ']' 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:43.283 10:50:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:43.283 [2024-11-07 10:50:09.878319] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:21:43.283 [2024-11-07 10:50:09.878368] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.283 [2024-11-07 10:50:09.954167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:43.283 [2024-11-07 10:50:09.991779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.283 [2024-11-07 10:50:09.991817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.283 [2024-11-07 10:50:09.991826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.283 [2024-11-07 10:50:09.991834] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.283 [2024-11-07 10:50:09.991841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
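What the trace above shows: nvmfappstart launches nvmf_tgt with core mask 0xE (cores 1-3, matching the three reactor lines below) and then blocks in waitforlisten until the new process answers on /var/tmp/spdk.sock. A minimal sketch of that wait loop, with a hypothetical helper name rather than the full autotest_common.sh implementation (which also retries RPC calls):

# sketch only: poll until the target process is listening on its RPC UNIX socket
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
        [[ -S $rpc_addr ]] && return 0            # RPC socket is up, done waiting
        sleep 0.1
    done
    return 1                                      # timed out
}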
00:21:43.283 [2024-11-07 10:50:09.993438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.283 [2024-11-07 10:50:09.993500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:43.283 [2024-11-07 10:50:09.993502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:43.283 [2024-11-07 10:50:10.171816] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ffd570/0x2001a60) succeed. 00:21:43.283 [2024-11-07 10:50:10.180830] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ffeb60/0x2043100) succeed. 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:43.283 Malloc0 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set 
+x 00:21:43.283 [2024-11-07 10:50:10.325739] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.283 { 00:21:43.283 "params": { 00:21:43.283 "name": "Nvme$subsystem", 00:21:43.283 "trtype": "$TEST_TRANSPORT", 00:21:43.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.283 "adrfam": "ipv4", 00:21:43.283 "trsvcid": "$NVMF_PORT", 00:21:43.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.283 "hdgst": ${hdgst:-false}, 00:21:43.283 "ddgst": ${ddgst:-false} 00:21:43.283 }, 00:21:43.283 "method": "bdev_nvme_attach_controller" 00:21:43.283 } 00:21:43.283 EOF 00:21:43.283 )") 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:21:43.283 10:50:10 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:43.283 "params": { 00:21:43.283 "name": "Nvme1", 00:21:43.283 "trtype": "rdma", 00:21:43.283 "traddr": "192.168.100.8", 00:21:43.283 "adrfam": "ipv4", 00:21:43.283 "trsvcid": "4420", 00:21:43.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.283 "hdgst": false, 00:21:43.284 "ddgst": false 00:21:43.284 }, 00:21:43.284 "method": "bdev_nvme_attach_controller" 00:21:43.284 }' 00:21:43.284 [2024-11-07 10:50:10.377582] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:21:43.284 [2024-11-07 10:50:10.377631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3874721 ] 00:21:43.284 [2024-11-07 10:50:10.471610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.284 [2024-11-07 10:50:10.511088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.284 Running I/O for 1 seconds... 
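The rpc_cmd calls traced above (host/bdevperf.sh@17 through @21) wrap scripts/rpc.py against the target's RPC socket; the same provisioning can be reproduced by hand with the sketch below, assuming the SPDK repo root as working directory and the default socket /var/tmp/spdk.sock:

# same provisioning as the rpc_cmd trace above, issued directly via scripts/rpc.py
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420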
00:21:44.216 18432.00 IOPS, 72.00 MiB/s
00:21:44.216 Latency(us)
00:21:44.216 [2024-11-07T09:50:11.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:44.216 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:44.216 Verification LBA range: start 0x0 length 0x4000
00:21:44.216 Nvme1n1 : 1.01 18472.27 72.16 0.00 0.00 6887.93 1336.93 10800.33
00:21:44.216 [2024-11-07T09:50:11.887Z] ===================================================================================================================
00:21:44.216 [2024-11-07T09:50:11.887Z] Total : 18472.27 72.16 0.00 0.00 6887.93 1336.93 10800.33
00:21:44.216 10:50:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3874954
00:21:44.216 10:50:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:21:44.216 10:50:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:21:44.216 10:50:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:21:44.216 10:50:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:21:44.216 10:50:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:21:44.216 10:50:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:21:44.216 10:50:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:21:44.216 {
00:21:44.216 "params": {
00:21:44.216 "name": "Nvme$subsystem",
00:21:44.216 "trtype": "$TEST_TRANSPORT",
00:21:44.216 "traddr": "$NVMF_FIRST_TARGET_IP",
00:21:44.216 "adrfam": "ipv4",
00:21:44.216 "trsvcid": "$NVMF_PORT",
00:21:44.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:21:44.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:21:44.216 "hdgst": ${hdgst:-false},
00:21:44.216 "ddgst": ${ddgst:-false}
00:21:44.216 },
00:21:44.216 "method": "bdev_nvme_attach_controller"
00:21:44.216 }
00:21:44.216 EOF
00:21:44.216 )")
00:21:44.216 10:50:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:21:44.474 10:50:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:21:44.474 10:50:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:21:44.474 10:50:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:21:44.474 "params": {
00:21:44.474 "name": "Nvme1",
00:21:44.474 "trtype": "rdma",
00:21:44.474 "traddr": "192.168.100.8",
00:21:44.474 "adrfam": "ipv4",
00:21:44.474 "trsvcid": "4420",
00:21:44.474 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:44.474 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:21:44.474 "hdgst": false,
00:21:44.474 "ddgst": false
00:21:44.474 },
00:21:44.474 "method": "bdev_nvme_attach_controller"
00:21:44.474 }'
00:21:44.474 [2024-11-07 10:50:11.921805] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:21:44.474 [2024-11-07 10:50:11.921857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3874954 ] 00:21:44.474 [2024-11-07 10:50:11.999605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.474 [2024-11-07 10:50:12.035560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.731 Running I/O for 15 seconds... 00:21:46.613 18403.00 IOPS, 71.89 MiB/s [2024-11-07T09:50:15.226Z] 18432.00 IOPS, 72.00 MiB/s [2024-11-07T09:50:15.226Z] 10:50:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3874588 00:21:47.555 10:50:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:21:48.491 16373.67 IOPS, 63.96 MiB/s [2024-11-07T09:50:16.162Z] [2024-11-07 10:50:15.911731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433c000 len:0x1000 key:0x181600 00:21:48.491 [2024-11-07 10:50:15.911769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.491 [2024-11-07 10:50:15.911787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433a000 len:0x1000 key:0x181600 00:21:48.491 [2024-11-07 10:50:15.911797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.491 [2024-11-07 10:50:15.911808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004338000 len:0x1000 key:0x181600 00:21:48.491 [2024-11-07 10:50:15.911817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.491 [2024-11-07 10:50:15.911827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004336000 len:0x1000 key:0x181600 00:21:48.491 [2024-11-07 10:50:15.911840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.491 [2024-11-07 10:50:15.911850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x181600 00:21:48.491 [2024-11-07 10:50:15.911858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.491 [2024-11-07 10:50:15.911868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004332000 len:0x1000 key:0x181600 00:21:48.491 [2024-11-07 10:50:15.911876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.491 [2024-11-07 10:50:15.911886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:130872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x181600 00:21:48.491 [2024-11-07 10:50:15.911894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.491 [2024-11-07 10:50:15.911905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432e000 len:0x1000 key:0x181600 00:21:48.491 [2024-11-07 10:50:15.911914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.491 [2024-11-07 10:50:15.911924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x181600 00:21:48.491 [2024-11-07 10:50:15.911932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.491 [2024-11-07 10:50:15.911943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x181600 00:21:48.491 [2024-11-07 10:50:15.911952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.491 [2024-11-07 10:50:15.911962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x181600 00:21:48.491 [2024-11-07 10:50:15.911971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.491 [2024-11-07 10:50:15.911981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 key:0x181600 00:21:48.491 [2024-11-07 10:50:15.911989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.491 [2024-11-07 10:50:15.912000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004324000 len:0x1000 key:0x181600 00:21:48.491 [2024-11-07 10:50:15.912009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.491 [2024-11-07 10:50:15.912019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004322000 len:0x1000 key:0x181600 00:21:48.491 [2024-11-07 10:50:15.912028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004320000 len:0x1000 key:0x181600 00:21:48.492 [2024-11-07 10:50:15.912047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 len:0x1000 key:0x181600 00:21:48.492 [2024-11-07 10:50:15.912070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 
dnr:0 00:21:48.492 [2024-11-07 10:50:15.912080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:130952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x181600 00:21:48.492 [2024-11-07 10:50:15.912092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x181600 00:21:48.492 [2024-11-07 10:50:15.912112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004318000 len:0x1000 key:0x181600 00:21:48.492 [2024-11-07 10:50:15.912132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x181600 00:21:48.492 [2024-11-07 10:50:15.912151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004314000 len:0x1000 key:0x181600 00:21:48.492 [2024-11-07 10:50:15.912171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004312000 len:0x1000 key:0x181600 00:21:48.492 [2024-11-07 10:50:15.912190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:131000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004310000 len:0x1000 key:0x181600 00:21:48.492 [2024-11-07 10:50:15.912209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:131008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x181600 00:21:48.492 [2024-11-07 10:50:15.912229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:131016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430c000 len:0x1000 key:0x181600 00:21:48.492 [2024-11-07 10:50:15.912248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912258] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:131024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x181600 00:21:48.492 [2024-11-07 10:50:15.912266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:131032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x181600 00:21:48.492 [2024-11-07 10:50:15.912287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:131040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x181600 00:21:48.492 [2024-11-07 10:50:15.912305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:131048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x181600 00:21:48.492 [2024-11-07 10:50:15.912323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:131056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x181600 00:21:48.492 [2024-11-07 10:50:15.912341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:131064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x181600 00:21:48.492 [2024-11-07 10:50:15.912359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.492 [2024-11-07 10:50:15.912378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.492 [2024-11-07 10:50:15.912396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.492 [2024-11-07 10:50:15.912414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.492 [2024-11-07 
10:50:15.912432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.492 [2024-11-07 10:50:15.912449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.492 [2024-11-07 10:50:15.912467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.492 [2024-11-07 10:50:15.912487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.492 [2024-11-07 10:50:15.912506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.492 [2024-11-07 10:50:15.912529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.492 [2024-11-07 10:50:15.912564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.492 [2024-11-07 10:50:15.912583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.492 [2024-11-07 10:50:15.912602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.492 [2024-11-07 10:50:15.912621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.492 [2024-11-07 
10:50:15.912640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.492 [2024-11-07 10:50:15.912658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.492 [2024-11-07 10:50:15.912677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.492 [2024-11-07 10:50:15.912687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.492 [2024-11-07 10:50:15.912696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.912706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.912715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.912725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.912735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.912745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.912754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.912765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.912774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.912784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.912793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.912802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.912811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.912822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 
10:50:15.912831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.912840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.912849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.912859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.912868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.912878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.912887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.912897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.912905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.912915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.912924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.912934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.912943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.912953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.912961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.912971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.912980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.912993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 
10:50:15.913021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 
10:50:15.913210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 10:50:15.913382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.493 [2024-11-07 10:50:15.913392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.493 [2024-11-07 
10:50:15.913401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 
10:50:15.913596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 
10:50:15.913792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 
10:50:15.913972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.913989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.913998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.914006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.914016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.914025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.914035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.914043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.494 [2024-11-07 10:50:15.914052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.494 [2024-11-07 10:50:15.914060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.495 [2024-11-07 10:50:15.914069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.495 [2024-11-07 10:50:15.914077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.495 [2024-11-07 10:50:15.914087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.495 [2024-11-07 10:50:15.914095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.495 [2024-11-07 10:50:15.914105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.495 [2024-11-07 10:50:15.914112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.495 [2024-11-07 10:50:15.914122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.495 [2024-11-07 10:50:15.914130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.495 [2024-11-07 10:50:15.914140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.495 [2024-11-07 
10:50:15.914148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.495 [2024-11-07 10:50:15.914158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.495 [2024-11-07 10:50:15.914167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.495 [2024-11-07 10:50:15.914176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.495 [2024-11-07 10:50:15.914184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73376000 sqhd:7250 p:0 m:0 dnr:0 00:21:48.495 [2024-11-07 10:50:15.916174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:48.495 [2024-11-07 10:50:15.916187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:48.495 [2024-11-07 10:50:15.916197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:8 PRP1 0x0 PRP2 0x0 00:21:48.495 [2024-11-07 10:50:15.916205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.495 [2024-11-07 10:50:15.918849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:48.495 [2024-11-07 10:50:15.932570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:21:48.495 [2024-11-07 10:50:15.935846] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:48.495 [2024-11-07 10:50:15.935876] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:48.495 [2024-11-07 10:50:15.935885] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040 00:21:49.320 12280.25 IOPS, 47.97 MiB/s [2024-11-07T09:50:16.991Z] [2024-11-07 10:50:16.939794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:21:49.320 [2024-11-07 10:50:16.939859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:21:49.320 [2024-11-07 10:50:16.940446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:21:49.320 [2024-11-07 10:50:16.940482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:21:49.320 [2024-11-07 10:50:16.940526] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:21:49.320 [2024-11-07 10:50:16.940563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
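The burst above is the host-side signature of a target going away mid-run: every queued WRITE is completed manually as ABORTED - SQ DELETION, the CQ reports transport error -6, and each reconnect attempt bounces off the RDMA CM with RDMA_CM_EVENT_REJECTED (connect error -74) until a listener reappears. How aggressively bdev_nvme retries is configurable; a minimal sketch of the relevant knobs, assuming the stock scripts/rpc.py helper and illustrative values rather than whatever this run used:

  # Sketch: tune the reconnect loop before bdev_nvme attaches any controller.
  # --ctrlr-loss-timeout-sec:   give up on the controller after 60s without a path
  # --reconnect-delay-sec:      wait 5s between reconnect attempts
  # --fast-io-fail-timeout-sec: start failing queued I/O after 30s
  scripts/rpc.py bdev_nvme_set_options --ctrlr-loss-timeout-sec 60 \
      --reconnect-delay-sec 5 --fast-io-fail-timeout-sec 30

With defaults the loop simply keeps cycling resetting controller -> REJECTED -> Resetting controller failed, which is exactly what the next seconds of this log show.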
00:21:49.320 [2024-11-07 10:50:16.944633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:49.320 [2024-11-07 10:50:16.947075] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:49.320 [2024-11-07 10:50:16.947095] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:49.320 [2024-11-07 10:50:16.947103] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040 00:21:50.512 9824.20 IOPS, 38.38 MiB/s [2024-11-07T09:50:18.183Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3874588 Killed "${NVMF_APP[@]}" "$@" 00:21:50.512 10:50:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:21:50.512 10:50:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:21:50.512 10:50:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:50.512 10:50:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:50.512 10:50:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:50.512 10:50:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3875918 00:21:50.512 10:50:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3875918 00:21:50.512 10:50:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:50.512 10:50:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3875918 ']' 00:21:50.512 10:50:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.512 10:50:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:50.512 10:50:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.512 10:50:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:50.512 10:50:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:50.512 [2024-11-07 10:50:17.943977] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:21:50.512 [2024-11-07 10:50:17.944029] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.512 [2024-11-07 10:50:17.951592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:21:50.512 [2024-11-07 10:50:17.951655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:21:50.512 [2024-11-07 10:50:17.951899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:21:50.512 [2024-11-07 10:50:17.951911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:21:50.512 [2024-11-07 10:50:17.951920] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:21:50.512 [2024-11-07 10:50:17.951930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:21:50.512 [2024-11-07 10:50:17.957114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:50.512 [2024-11-07 10:50:17.960500] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:50.512 [2024-11-07 10:50:17.960527] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:50.512 [2024-11-07 10:50:17.960552] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040 00:21:50.512 [2024-11-07 10:50:18.021860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:50.512 [2024-11-07 10:50:18.061395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.512 [2024-11-07 10:50:18.061431] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.512 [2024-11-07 10:50:18.061441] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.512 [2024-11-07 10:50:18.061449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.512 [2024-11-07 10:50:18.061457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
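The app_setup_trace notices double as capture instructions: the restarted target registered tracepoint group mask 0xFFFF and exposes its trace buffer through shared memory under instance id 0. Both paths the notices name, as a minimal sketch:

  # Live snapshot of events from the running nvmf app (shm id 0):
  spdk_trace -s nvmf -i 0
  # Or preserve the shared-memory trace file for offline analysis/debug:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0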
00:21:50.512 [2024-11-07 10:50:18.062881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.512 [2024-11-07 10:50:18.062972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:50.512 [2024-11-07 10:50:18.062974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.512 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:50.512 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:21:50.512 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:50.513 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:50.513 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:50.771 8186.83 IOPS, 31.98 MiB/s [2024-11-07T09:50:18.442Z] [2024-11-07 10:50:18.238939] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2331570/0x2335a60) succeed. 00:21:50.771 [2024-11-07 10:50:18.248138] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2332b60/0x2377100) succeed. 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:50.771 Malloc0 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.771 10:50:18 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:50.771 [2024-11-07 10:50:18.393191] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.771 10:50:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3874954 00:21:51.337 [2024-11-07 10:50:18.964627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:21:51.337 [2024-11-07 10:50:18.964657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:21:51.337 [2024-11-07 10:50:18.964830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:21:51.337 [2024-11-07 10:50:18.964842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:21:51.337 [2024-11-07 10:50:18.964855] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:21:51.337 [2024-11-07 10:50:18.964868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:21:51.337 [2024-11-07 10:50:18.972551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:51.594 [2024-11-07 10:50:19.013306] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:21:52.968 7576.86 IOPS, 29.60 MiB/s [2024-11-07T09:50:21.573Z] 8906.75 IOPS, 34.79 MiB/s [2024-11-07T09:50:22.506Z] 9948.22 IOPS, 38.86 MiB/s [2024-11-07T09:50:23.440Z] 10780.90 IOPS, 42.11 MiB/s [2024-11-07T09:50:24.374Z] 11459.27 IOPS, 44.76 MiB/s [2024-11-07T09:50:25.307Z] 12026.75 IOPS, 46.98 MiB/s [2024-11-07T09:50:26.679Z] 12508.85 IOPS, 48.86 MiB/s [2024-11-07T09:50:27.612Z] 12919.93 IOPS, 50.47 MiB/s [2024-11-07T09:50:27.612Z] 13275.27 IOPS, 51.86 MiB/s 00:21:59.941 Latency(us) 00:21:59.941 [2024-11-07T09:50:27.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.942 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:59.942 Verification LBA range: start 0x0 length 0x4000 00:21:59.942 Nvme1n1 : 15.01 13277.28 51.86 10695.99 0.00 5320.88 348.98 1040187.39 00:21:59.942 [2024-11-07T09:50:27.613Z] =================================================================================================================== 00:21:59.942 [2024-11-07T09:50:27.613Z] Total : 13277.28 51.86 10695.99 0.00 5320.88 348.98 1040187.39 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 
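Stripped of the xtrace prefixes, the target bring-up that lets the waiting bdevperf process (wait 3874954) reconnect is a five-step RPC sequence; rpc_cmd in the harness is a thin wrapper around scripts/rpc.py, so the same setup done by hand looks roughly like this sketch:

  # Recreate the transport, backing bdev, subsystem, namespace and listener:
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

As soon as the listener notice fires, the reconnect succeeds (Resetting controller successful) and throughput climbs back from 7576.86 IOPS to the ~13275 IOPS steady state summarized in the latency table above.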
00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:59.942 rmmod nvme_rdma 00:21:59.942 rmmod nvme_fabrics 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3875918 ']' 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3875918 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 3875918 ']' 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 3875918 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3875918 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3875918' 00:21:59.942 killing process with pid 3875918 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 3875918 00:21:59.942 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 3875918 00:22:00.201 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:00.201 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:00.201 00:22:00.201 real 0m25.214s 00:22:00.201 user 1m2.478s 00:22:00.201 sys 0m6.544s 00:22:00.201 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:00.201 10:50:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:00.201 ************************************ 00:22:00.201 END TEST nvmf_bdevperf 00:22:00.201 ************************************ 00:22:00.201 10:50:27 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:22:00.201 10:50:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:00.201 10:50:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:00.201 10:50:27 nvmf_rdma.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:00.201 ************************************ 00:22:00.201 START TEST nvmf_target_disconnect 00:22:00.201 ************************************ 00:22:00.201 10:50:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:22:00.460 * Looking for test storage... 00:22:00.460 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:00.460 10:50:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:00.460 10:50:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:22:00.460 10:50:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:00.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.461 --rc genhtml_branch_coverage=1 00:22:00.461 --rc genhtml_function_coverage=1 00:22:00.461 --rc genhtml_legend=1 00:22:00.461 --rc geninfo_all_blocks=1 00:22:00.461 --rc geninfo_unexecuted_blocks=1 00:22:00.461 00:22:00.461 ' 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:00.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.461 --rc genhtml_branch_coverage=1 00:22:00.461 --rc genhtml_function_coverage=1 00:22:00.461 --rc genhtml_legend=1 00:22:00.461 --rc geninfo_all_blocks=1 00:22:00.461 --rc geninfo_unexecuted_blocks=1 00:22:00.461 00:22:00.461 ' 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:00.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.461 --rc genhtml_branch_coverage=1 00:22:00.461 --rc genhtml_function_coverage=1 00:22:00.461 --rc genhtml_legend=1 00:22:00.461 --rc geninfo_all_blocks=1 00:22:00.461 --rc geninfo_unexecuted_blocks=1 00:22:00.461 00:22:00.461 ' 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:00.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.461 --rc genhtml_branch_coverage=1 00:22:00.461 --rc genhtml_function_coverage=1 00:22:00.461 --rc genhtml_legend=1 00:22:00.461 --rc geninfo_all_blocks=1 00:22:00.461 --rc geninfo_unexecuted_blocks=1 00:22:00.461 00:22:00.461 ' 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.461 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:00.462 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:22:00.462 10:50:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:07.028 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:07.028 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:07.028 10:50:34 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:07.028 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:07.028 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 
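rdma_device_init is where the harness stops matching PCI IDs and makes sure the kernel half of the stack is loaded before assigning IPs to the two mlx5 ports it just found. The modprobe sequence it runs (echoed in the next lines) is easy to reproduce by hand; a sketch with a sanity check appended:

  # Load the kernel IB/RDMA modules the NVMe-oF RDMA tests depend on:
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$m"
  done
  # Confirm the core pieces registered:
  lsmod | grep -E 'rdma_cm|ib_uverbs'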
00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:07.028 10:50:34 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:07.028 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:07.029 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:07.029 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:07.029 altname enp217s0f0np0 00:22:07.029 altname ens818f0np0 00:22:07.029 inet 192.168.100.8/24 scope global mlx_0_0 00:22:07.029 valid_lft forever preferred_lft forever 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:07.029 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:07.029 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:07.029 altname enp217s0f1np1 00:22:07.029 altname ens818f1np1 00:22:07.029 inet 192.168.100.9/24 scope global mlx_0_1 00:22:07.029 valid_lft forever preferred_lft forever 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
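get_ip_address is plain text surgery on iproute2 output; the awk/cut pipeline interleaved above reduces to a one-liner. A sketch against the first port of this rig:

  # IPv4 address assigned to an RDMA netdev (mlx_0_0 here carries 192.168.100.8):
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1

The same pipeline against mlx_0_1 yields 192.168.100.9, and the pair is then exported as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP a few lines down.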
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}'
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}'
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:22:07.029 192.168.100.9'
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:22:07.029 192.168.100.9'
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:22:07.029 192.168.100.9'
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2
00:22:07.029 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:22:07.289 ************************************
00:22:07.289 START TEST nvmf_target_disconnect_tc1
00:22:07.289 ************************************
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]]
00:22:07.289 10:50:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:22:07.289 [2024-11-07 10:50:34.865388] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:07.289 [2024-11-07 10:50:34.865443] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:07.289 [2024-11-07 10:50:34.865455] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040
00:22:08.224 [2024-11-07 10:50:35.869378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0
00:22:08.224 [2024-11-07 10:50:35.869411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state.
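The nvmf/common.sh trace a few records above (get_rdma_if_list / get_ip_address) reduces to a short shell pipeline. A minimal sketch, assuming the same mlx_0_0/mlx_0_1 interface names shown in the trace; the helper below is a hypothetical condensation, not the script itself:

    # Condensed from the traced get_ip_address()/RDMA_IP_LIST logic (hypothetical helper).
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9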
00:22:08.224 [2024-11-07 10:50:35.869423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state
00:22:08.224 [2024-11-07 10:50:35.869450] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:22:08.224 [2024-11-07 10:50:35.869462] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed
00:22:08.224 spdk_nvme_probe() failed for transport address '192.168.100.8'
00:22:08.224 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:22:08.224 Initializing NVMe Controllers
00:22:08.224 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1
00:22:08.224 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:08.224 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:08.224 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:08.224
00:22:08.224 real	0m1.147s
00:22:08.224 user	0m0.882s
00:22:08.224 sys	0m0.252s
00:22:08.224 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:22:08.224 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:22:08.224 ************************************
00:22:08.224 END TEST nvmf_target_disconnect_tc1
00:22:08.224 ************************************
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:22:08.482 ************************************
00:22:08.482 START TEST nvmf_target_disconnect_tc2
00:22:08.482 ************************************
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3881018
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3881018
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3881018 ']'
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:08.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:22:08.482 10:50:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:08.482 [2024-11-07 10:50:35.995637] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:22:08.482 [2024-11-07 10:50:35.995693] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:08.482 [2024-11-07 10:50:36.088783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:08.482 [2024-11-07 10:50:36.129004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:08.482 [2024-11-07 10:50:36.129047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:08.482 [2024-11-07 10:50:36.129057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:08.482 [2024-11-07 10:50:36.129065] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:08.482 [2024-11-07 10:50:36.129072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
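The nvmfappstart trace above amounts to: launch nvmf_tgt in the background, record its PID, then block until the app answers on the default RPC socket. A rough sketch of that flow, where the polling loop is an assumption standing in for SPDK's waitforlisten helper (which retries up to the max_retries=100 seen above):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # Block until the target is up and listening on the UNIX domain RPC socket.
    while ! [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done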
00:22:08.482 [2024-11-07 10:50:36.130779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:22:08.482 [2024-11-07 10:50:36.130892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:22:08.482 [2024-11-07 10:50:36.130999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:22:08.482 [2024-11-07 10:50:36.131000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:22:09.415 10:50:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:22:09.415 10:50:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:22:09.415 10:50:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:09.415 10:50:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:22:09.415 10:50:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:09.415 10:50:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:09.415 10:50:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:22:09.415 10:50:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:09.415 10:50:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:09.415 Malloc0
00:22:09.415 10:50:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:09.415 10:50:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:22:09.415 10:50:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:09.415 10:50:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:09.415 [2024-11-07 10:50:36.933338] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bae120/0x1bb9be0) succeed.
00:22:09.415 [2024-11-07 10:50:36.942978] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1baf7b0/0x1bfb280) succeed.
00:22:09.415 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:09.415 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:09.415 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:09.415 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:09.415 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:09.415 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:09.415 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:09.415 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:09.415 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:09.415 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:22:09.415 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:09.415 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:09.672 [2024-11-07 10:50:37.086863] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:22:09.672 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:09.672 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:22:09.672 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:09.672 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:09.672 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:09.672 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3881191
00:22:09.672 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:22:09.672 10:50:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:22:11.567 10:50:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3881018
00:22:11.568 10:50:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:22:12.940 Read completed with error (sct=0, sc=8)
00:22:12.940 starting I/O failed
00:22:12.940 Write completed with error (sct=0, sc=8)
00:22:12.940 starting I/O failed
00:22:12.940 Read completed with error (sct=0, sc=8)
00:22:12.940 starting I/O failed
00:22:12.940 Read completed with error (sct=0, sc=8)
00:22:12.940 starting I/O failed
00:22:12.940 Read completed with error (sct=0, sc=8)
00:22:12.940 starting I/O failed
00:22:12.940 Read completed with error (sct=0, sc=8)
00:22:12.940 starting I/O failed
00:22:12.940 Read completed with error (sct=0, sc=8)
00:22:12.940 starting I/O failed
00:22:12.940 Write completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Read completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Write completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Write completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Write completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Write completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Write completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Read completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Read completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Read completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Read completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Read completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Read completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Read completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Read completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Read completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Read completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Read completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Write completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Write completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Write completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Write completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Write completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Write completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 Write completed with error (sct=0, sc=8)
00:22:12.941 starting I/O failed
00:22:12.941 [2024-11-07 10:50:40.297504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:13.520 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3881018 Killed                  "${NVMF_APP[@]}" "$@"
00:22:13.520 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8
00:22:13.520 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:22:13.520 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:13.520 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:13.520 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:13.520 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:22:13.520 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3881968
00:22:13.520 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3881968
00:22:13.520 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3881968 ']'
00:22:13.520 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:13.520 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:22:13.520 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:13.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:13.520 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:22:13.520 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:13.520 [2024-11-07 10:50:41.151599] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:22:13.520 [2024-11-07 10:50:41.151647] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:13.778 [2024-11-07 10:50:41.241333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:13.778 [2024-11-07 10:50:41.280068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:13.778 [2024-11-07 10:50:41.280109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:13.778 [2024-11-07 10:50:41.280119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:13.778 [2024-11-07 10:50:41.280127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:13.778 [2024-11-07 10:50:41.280134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
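Pieced together from the target_disconnect.sh line numbers traced above (@40-@48), tc2 runs the reconnect example against the live target, hard-kills the target mid-I/O, then restarts it so the host can recover. A paraphrase of that sequence (variable names mirror the trace; $rootdir standing for the SPDK checkout and the disconnect_init body are assumptions):

    "$rootdir"/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"              # produces the 'starting I/O failed' burst above
    sleep 2
    disconnect_init 192.168.100.8   # restart the target and re-create the subsystem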
00:22:13.778 [2024-11-07 10:50:41.281813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:22:13.778 [2024-11-07 10:50:41.281923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:22:13.778 [2024-11-07 10:50:41.282029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:22:13.778 [2024-11-07 10:50:41.282030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Write completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Write completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Write completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Write completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Write completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Write completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Write completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Write completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Write completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Write completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Write completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Write completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Write completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 Read completed with error (sct=0, sc=8)
00:22:13.778 starting I/O failed
00:22:13.778 [2024-11-07 10:50:41.302648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:13.778 [2024-11-07 10:50:41.304479] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:13.778 [2024-11-07 10:50:41.304515] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:13.778 [2024-11-07 10:50:41.304524] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:13.778 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:22:13.778 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:22:13.778 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:13.778 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:22:13.778 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:13.778 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:13.778 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:22:13.778 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:13.778 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:14.037 Malloc0
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:14.037 [2024-11-07 10:50:41.493246] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d50120/0x1d5bbe0) succeed.
00:22:14.037 [2024-11-07 10:50:41.502828] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d517b0/0x1d9d280) succeed.
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:14.037 [2024-11-07 10:50:41.643079] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:14.037 10:50:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3881191
00:22:14.972 [2024-11-07 10:50:42.308491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:14.972 qpair failed and we were unable to recover it.
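rpc_cmd in this trace resolves to SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock, so the subsystem setup just replayed for the restarted target is equivalent to issuing the same RPCs by hand (a sketch; only the rpc.py path, taken from the workspace layout above, is assumed):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420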
00:22:14.972 [2024-11-07 10:50:42.312993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:14.972 [2024-11-07 10:50:42.313049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:14.972 [2024-11-07 10:50:42.313073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:14.972 [2024-11-07 10:50:42.313084] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:14.972 [2024-11-07 10:50:42.313093] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:14.972 [2024-11-07 10:50:42.322971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:14.972 qpair failed and we were unable to recover it.
00:22:14.972 [2024-11-07 10:50:42.332792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:14.972 [2024-11-07 10:50:42.332835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:14.972 [2024-11-07 10:50:42.332853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:14.972 [2024-11-07 10:50:42.332862] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:14.972 [2024-11-07 10:50:42.332872] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:14.972 [2024-11-07 10:50:42.343064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:14.972 qpair failed and we were unable to recover it.
00:22:14.972 [2024-11-07 10:50:42.353052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:14.972 [2024-11-07 10:50:42.353092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:14.972 [2024-11-07 10:50:42.353110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:14.972 [2024-11-07 10:50:42.353119] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:14.972 [2024-11-07 10:50:42.353128] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:14.972 [2024-11-07 10:50:42.363172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:14.972 qpair failed and we were unable to recover it.
00:22:14.972 [2024-11-07 10:50:42.372904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:14.972 [2024-11-07 10:50:42.372950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:14.972 [2024-11-07 10:50:42.372968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:14.972 [2024-11-07 10:50:42.372977] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:14.972 [2024-11-07 10:50:42.372986] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:14.972 [2024-11-07 10:50:42.383282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:14.972 qpair failed and we were unable to recover it.
00:22:14.972 [2024-11-07 10:50:42.393044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:14.972 [2024-11-07 10:50:42.393089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:14.972 [2024-11-07 10:50:42.393107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:14.972 [2024-11-07 10:50:42.393120] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:14.972 [2024-11-07 10:50:42.393128] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:14.972 [2024-11-07 10:50:42.403251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:14.972 qpair failed and we were unable to recover it.
00:22:14.972 [2024-11-07 10:50:42.413005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:14.972 [2024-11-07 10:50:42.413045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:14.972 [2024-11-07 10:50:42.413063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:14.972 [2024-11-07 10:50:42.413072] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:14.972 [2024-11-07 10:50:42.413080] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:14.972 [2024-11-07 10:50:42.423394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:14.972 qpair failed and we were unable to recover it.
00:22:14.972 [2024-11-07 10:50:42.433186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:14.972 [2024-11-07 10:50:42.433224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:14.972 [2024-11-07 10:50:42.433241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:14.972 [2024-11-07 10:50:42.433251] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:14.972 [2024-11-07 10:50:42.433259] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:14.972 [2024-11-07 10:50:42.443336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:14.972 qpair failed and we were unable to recover it.
00:22:14.972 [2024-11-07 10:50:42.453315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:14.972 [2024-11-07 10:50:42.453359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:14.972 [2024-11-07 10:50:42.453376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:14.972 [2024-11-07 10:50:42.453385] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:14.972 [2024-11-07 10:50:42.453394] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:14.972 [2024-11-07 10:50:42.463527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:14.972 qpair failed and we were unable to recover it.
00:22:14.972 [2024-11-07 10:50:42.473366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:14.972 [2024-11-07 10:50:42.473407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:14.972 [2024-11-07 10:50:42.473424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:14.972 [2024-11-07 10:50:42.473433] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:14.972 [2024-11-07 10:50:42.473441] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:14.972 [2024-11-07 10:50:42.483662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:14.972 qpair failed and we were unable to recover it.
00:22:14.972 [2024-11-07 10:50:42.493213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:14.972 [2024-11-07 10:50:42.493254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:14.972 [2024-11-07 10:50:42.493271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:14.972 [2024-11-07 10:50:42.493281] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:14.972 [2024-11-07 10:50:42.493289] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:14.972 [2024-11-07 10:50:42.503607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:14.972 qpair failed and we were unable to recover it.
00:22:14.972 [2024-11-07 10:50:42.513258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:14.972 [2024-11-07 10:50:42.513302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:14.972 [2024-11-07 10:50:42.513320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:14.972 [2024-11-07 10:50:42.513329] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:14.972 [2024-11-07 10:50:42.513338] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:14.972 [2024-11-07 10:50:42.523663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:14.972 qpair failed and we were unable to recover it.
00:22:14.972 [2024-11-07 10:50:42.533407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:14.972 [2024-11-07 10:50:42.533451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:14.972 [2024-11-07 10:50:42.533469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:14.972 [2024-11-07 10:50:42.533478] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:14.972 [2024-11-07 10:50:42.533486] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:14.972 [2024-11-07 10:50:42.543743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:14.972 qpair failed and we were unable to recover it.
00:22:14.972 [2024-11-07 10:50:42.553421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:14.972 [2024-11-07 10:50:42.553461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:14.972 [2024-11-07 10:50:42.553478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:14.972 [2024-11-07 10:50:42.553488] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:14.972 [2024-11-07 10:50:42.553497] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:14.972 [2024-11-07 10:50:42.563812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:14.972 qpair failed and we were unable to recover it.
00:22:14.972 [2024-11-07 10:50:42.573394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:14.972 [2024-11-07 10:50:42.573436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:14.972 [2024-11-07 10:50:42.573453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:14.972 [2024-11-07 10:50:42.573462] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:14.972 [2024-11-07 10:50:42.573471] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:14.972 [2024-11-07 10:50:42.583631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:14.972 qpair failed and we were unable to recover it.
00:22:14.972 [2024-11-07 10:50:42.593596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:14.972 [2024-11-07 10:50:42.593638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:14.972 [2024-11-07 10:50:42.593655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:14.972 [2024-11-07 10:50:42.593664] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:14.972 [2024-11-07 10:50:42.593673] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:14.972 [2024-11-07 10:50:42.603842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:14.972 qpair failed and we were unable to recover it.
00:22:14.973 [2024-11-07 10:50:42.613705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:14.973 [2024-11-07 10:50:42.613750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:14.973 [2024-11-07 10:50:42.613767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:14.973 [2024-11-07 10:50:42.613776] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:14.973 [2024-11-07 10:50:42.613785] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:14.973 [2024-11-07 10:50:42.624043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:14.973 qpair failed and we were unable to recover it. 00:22:14.973 [2024-11-07 10:50:42.633730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:14.973 [2024-11-07 10:50:42.633770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:14.973 [2024-11-07 10:50:42.633788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:14.973 [2024-11-07 10:50:42.633798] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:14.973 [2024-11-07 10:50:42.633807] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:15.233 [2024-11-07 10:50:42.644150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:15.233 qpair failed and we were unable to recover it. 00:22:15.233 [2024-11-07 10:50:42.653888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:15.233 [2024-11-07 10:50:42.653924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:15.233 [2024-11-07 10:50:42.653947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:15.233 [2024-11-07 10:50:42.653956] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:15.233 [2024-11-07 10:50:42.653965] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:15.233 [2024-11-07 10:50:42.664112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:15.233 qpair failed and we were unable to recover it. 
00:22:15.233 [2024-11-07 10:50:42.673716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:15.233 [2024-11-07 10:50:42.673756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:15.233 [2024-11-07 10:50:42.673774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:15.233 [2024-11-07 10:50:42.673783] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:15.233 [2024-11-07 10:50:42.673791] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:15.233 [2024-11-07 10:50:42.684259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:15.233 qpair failed and we were unable to recover it. 00:22:15.233 [2024-11-07 10:50:42.693753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:15.233 [2024-11-07 10:50:42.693794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:15.233 [2024-11-07 10:50:42.693811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:15.233 [2024-11-07 10:50:42.693820] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:15.233 [2024-11-07 10:50:42.693829] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:15.233 [2024-11-07 10:50:42.703918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:15.233 qpair failed and we were unable to recover it. 00:22:15.233 [2024-11-07 10:50:42.713928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:15.233 [2024-11-07 10:50:42.713969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:15.233 [2024-11-07 10:50:42.713986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:15.233 [2024-11-07 10:50:42.713995] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:15.233 [2024-11-07 10:50:42.714003] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:15.233 [2024-11-07 10:50:42.724232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:15.233 qpair failed and we were unable to recover it. 
00:22:15.233 [2024-11-07 10:50:42.733987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:15.233 [2024-11-07 10:50:42.734030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:15.233 [2024-11-07 10:50:42.734051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:15.233 [2024-11-07 10:50:42.734064] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:15.233 [2024-11-07 10:50:42.734072] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:15.233 [2024-11-07 10:50:42.744335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:15.233 qpair failed and we were unable to recover it. 00:22:15.233 [2024-11-07 10:50:42.754015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:15.233 [2024-11-07 10:50:42.754056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:15.233 [2024-11-07 10:50:42.754074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:15.233 [2024-11-07 10:50:42.754083] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:15.233 [2024-11-07 10:50:42.754091] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:15.233 [2024-11-07 10:50:42.764441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:15.233 qpair failed and we were unable to recover it. 00:22:15.233 [2024-11-07 10:50:42.774121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:15.233 [2024-11-07 10:50:42.774162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:15.233 [2024-11-07 10:50:42.774179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:15.233 [2024-11-07 10:50:42.774188] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:15.233 [2024-11-07 10:50:42.774197] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:15.233 [2024-11-07 10:50:42.784249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:15.233 qpair failed and we were unable to recover it. 
00:22:15.233 [2024-11-07 10:50:42.794258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:15.233 [2024-11-07 10:50:42.794298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:15.233 [2024-11-07 10:50:42.794315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:15.233 [2024-11-07 10:50:42.794324] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:15.233 [2024-11-07 10:50:42.794332] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:15.233 [2024-11-07 10:50:42.804604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:15.233 qpair failed and we were unable to recover it. 00:22:15.233 [2024-11-07 10:50:42.814042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:15.233 [2024-11-07 10:50:42.814087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:15.233 [2024-11-07 10:50:42.814104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:15.233 [2024-11-07 10:50:42.814113] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:15.233 [2024-11-07 10:50:42.814122] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:15.233 [2024-11-07 10:50:42.824280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:15.233 qpair failed and we were unable to recover it. 00:22:15.233 [2024-11-07 10:50:42.834251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:15.233 [2024-11-07 10:50:42.834291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:15.233 [2024-11-07 10:50:42.834308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:15.233 [2024-11-07 10:50:42.834317] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:15.233 [2024-11-07 10:50:42.834325] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:15.233 [2024-11-07 10:50:42.844537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:15.233 qpair failed and we were unable to recover it. 
00:22:15.233 [2024-11-07 10:50:42.854379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.233 [2024-11-07 10:50:42.854418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.233 [2024-11-07 10:50:42.854435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.233 [2024-11-07 10:50:42.854444] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.233 [2024-11-07 10:50:42.854453] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.233 [2024-11-07 10:50:42.864664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.233 qpair failed and we were unable to recover it.
00:22:15.233 [2024-11-07 10:50:42.874297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.233 [2024-11-07 10:50:42.874343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.233 [2024-11-07 10:50:42.874360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.233 [2024-11-07 10:50:42.874369] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.233 [2024-11-07 10:50:42.874378] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.234 [2024-11-07 10:50:42.884774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.234 qpair failed and we were unable to recover it.
00:22:15.234 [2024-11-07 10:50:42.894474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.234 [2024-11-07 10:50:42.894526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.234 [2024-11-07 10:50:42.894544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.234 [2024-11-07 10:50:42.894553] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.234 [2024-11-07 10:50:42.894562] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.492 [2024-11-07 10:50:42.904674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.492 qpair failed and we were unable to recover it.
00:22:15.492 [2024-11-07 10:50:42.914478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.492 [2024-11-07 10:50:42.914527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.492 [2024-11-07 10:50:42.914546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.492 [2024-11-07 10:50:42.914555] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.492 [2024-11-07 10:50:42.914564] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.492 [2024-11-07 10:50:42.924690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.492 qpair failed and we were unable to recover it.
00:22:15.492 [2024-11-07 10:50:42.934730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.492 [2024-11-07 10:50:42.934771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.492 [2024-11-07 10:50:42.934788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.492 [2024-11-07 10:50:42.934797] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.492 [2024-11-07 10:50:42.934805] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.492 [2024-11-07 10:50:42.944789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.492 qpair failed and we were unable to recover it.
00:22:15.492 [2024-11-07 10:50:42.954500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.492 [2024-11-07 10:50:42.954547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.492 [2024-11-07 10:50:42.954564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.492 [2024-11-07 10:50:42.954573] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.492 [2024-11-07 10:50:42.954581] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.492 [2024-11-07 10:50:42.965006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.492 qpair failed and we were unable to recover it.
00:22:15.492 [2024-11-07 10:50:42.974892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.492 [2024-11-07 10:50:42.974936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.492 [2024-11-07 10:50:42.974953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.492 [2024-11-07 10:50:42.974962] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.493 [2024-11-07 10:50:42.974970] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.493 [2024-11-07 10:50:42.984928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.493 qpair failed and we were unable to recover it.
00:22:15.493 [2024-11-07 10:50:42.994894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.493 [2024-11-07 10:50:42.994931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.493 [2024-11-07 10:50:42.994951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.493 [2024-11-07 10:50:42.994961] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.493 [2024-11-07 10:50:42.994969] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.493 [2024-11-07 10:50:43.005152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.493 qpair failed and we were unable to recover it.
00:22:15.493 [2024-11-07 10:50:43.014916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.493 [2024-11-07 10:50:43.014959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.493 [2024-11-07 10:50:43.014977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.493 [2024-11-07 10:50:43.014986] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.493 [2024-11-07 10:50:43.014994] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.493 [2024-11-07 10:50:43.025185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.493 qpair failed and we were unable to recover it.
00:22:15.493 [2024-11-07 10:50:43.035003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.493 [2024-11-07 10:50:43.035047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.493 [2024-11-07 10:50:43.035064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.493 [2024-11-07 10:50:43.035073] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.493 [2024-11-07 10:50:43.035082] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.493 [2024-11-07 10:50:43.045299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.493 qpair failed and we were unable to recover it.
00:22:15.493 [2024-11-07 10:50:43.055021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.493 [2024-11-07 10:50:43.055060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.493 [2024-11-07 10:50:43.055076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.493 [2024-11-07 10:50:43.055086] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.493 [2024-11-07 10:50:43.055094] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.493 [2024-11-07 10:50:43.065393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.493 qpair failed and we were unable to recover it.
00:22:15.493 [2024-11-07 10:50:43.075119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.493 [2024-11-07 10:50:43.075161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.493 [2024-11-07 10:50:43.075178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.493 [2024-11-07 10:50:43.075188] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.493 [2024-11-07 10:50:43.075199] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.493 [2024-11-07 10:50:43.085463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.493 qpair failed and we were unable to recover it.
00:22:15.493 [2024-11-07 10:50:43.095131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.493 [2024-11-07 10:50:43.095172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.493 [2024-11-07 10:50:43.095190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.493 [2024-11-07 10:50:43.095199] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.493 [2024-11-07 10:50:43.095208] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.493 [2024-11-07 10:50:43.105550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.493 qpair failed and we were unable to recover it.
00:22:15.493 [2024-11-07 10:50:43.115242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.493 [2024-11-07 10:50:43.115282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.493 [2024-11-07 10:50:43.115299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.493 [2024-11-07 10:50:43.115308] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.493 [2024-11-07 10:50:43.115317] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.493 [2024-11-07 10:50:43.125674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.493 qpair failed and we were unable to recover it.
00:22:15.493 [2024-11-07 10:50:43.135296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.493 [2024-11-07 10:50:43.135338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.493 [2024-11-07 10:50:43.135355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.493 [2024-11-07 10:50:43.135364] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.493 [2024-11-07 10:50:43.135372] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.493 [2024-11-07 10:50:43.145538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.493 qpair failed and we were unable to recover it.
00:22:15.493 [2024-11-07 10:50:43.155304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.493 [2024-11-07 10:50:43.155342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.493 [2024-11-07 10:50:43.155358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.493 [2024-11-07 10:50:43.155368] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.493 [2024-11-07 10:50:43.155376] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.752 [2024-11-07 10:50:43.165728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.752 qpair failed and we were unable to recover it.
00:22:15.752 [2024-11-07 10:50:43.175401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.752 [2024-11-07 10:50:43.175441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.752 [2024-11-07 10:50:43.175459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.752 [2024-11-07 10:50:43.175469] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.752 [2024-11-07 10:50:43.175477] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.752 [2024-11-07 10:50:43.185747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.752 qpair failed and we were unable to recover it.
00:22:15.752 [2024-11-07 10:50:43.195441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.752 [2024-11-07 10:50:43.195479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.752 [2024-11-07 10:50:43.195496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.752 [2024-11-07 10:50:43.195505] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.752 [2024-11-07 10:50:43.195520] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.752 [2024-11-07 10:50:43.205694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.752 qpair failed and we were unable to recover it.
00:22:15.752 [2024-11-07 10:50:43.215587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.752 [2024-11-07 10:50:43.215625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.752 [2024-11-07 10:50:43.215642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.752 [2024-11-07 10:50:43.215651] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.752 [2024-11-07 10:50:43.215659] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.753 [2024-11-07 10:50:43.225608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.753 qpair failed and we were unable to recover it.
00:22:15.753 [2024-11-07 10:50:43.235677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.753 [2024-11-07 10:50:43.235717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.753 [2024-11-07 10:50:43.235734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.753 [2024-11-07 10:50:43.235743] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.753 [2024-11-07 10:50:43.235751] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.753 [2024-11-07 10:50:43.246054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.753 qpair failed and we were unable to recover it.
00:22:15.753 [2024-11-07 10:50:43.255662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.753 [2024-11-07 10:50:43.255707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.753 [2024-11-07 10:50:43.255724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.753 [2024-11-07 10:50:43.255733] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.753 [2024-11-07 10:50:43.255742] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.753 [2024-11-07 10:50:43.265955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.753 qpair failed and we were unable to recover it.
00:22:15.753 [2024-11-07 10:50:43.275695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.753 [2024-11-07 10:50:43.275738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.753 [2024-11-07 10:50:43.275755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.753 [2024-11-07 10:50:43.275764] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.753 [2024-11-07 10:50:43.275772] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.753 [2024-11-07 10:50:43.285954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.753 qpair failed and we were unable to recover it.
00:22:15.753 [2024-11-07 10:50:43.295799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.753 [2024-11-07 10:50:43.295840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.753 [2024-11-07 10:50:43.295857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.753 [2024-11-07 10:50:43.295866] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.753 [2024-11-07 10:50:43.295874] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.753 [2024-11-07 10:50:43.305884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.753 qpair failed and we were unable to recover it.
00:22:15.753 [2024-11-07 10:50:43.315866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.753 [2024-11-07 10:50:43.315909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.753 [2024-11-07 10:50:43.315925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.753 [2024-11-07 10:50:43.315934] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.753 [2024-11-07 10:50:43.315943] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.753 [2024-11-07 10:50:43.326036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.753 qpair failed and we were unable to recover it.
00:22:15.753 [2024-11-07 10:50:43.335980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.753 [2024-11-07 10:50:43.336021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.753 [2024-11-07 10:50:43.336041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.753 [2024-11-07 10:50:43.336050] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.753 [2024-11-07 10:50:43.336058] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.753 [2024-11-07 10:50:43.346088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.753 qpair failed and we were unable to recover it.
00:22:15.753 [2024-11-07 10:50:43.356007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.753 [2024-11-07 10:50:43.356053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.753 [2024-11-07 10:50:43.356070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.753 [2024-11-07 10:50:43.356080] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.753 [2024-11-07 10:50:43.356088] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.753 [2024-11-07 10:50:43.366390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.753 qpair failed and we were unable to recover it.
00:22:15.753 [2024-11-07 10:50:43.375992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.753 [2024-11-07 10:50:43.376033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.753 [2024-11-07 10:50:43.376050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.753 [2024-11-07 10:50:43.376059] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.753 [2024-11-07 10:50:43.376068] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.753 [2024-11-07 10:50:43.386175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.753 qpair failed and we were unable to recover it.
00:22:15.753 [2024-11-07 10:50:43.396095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.753 [2024-11-07 10:50:43.396132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.753 [2024-11-07 10:50:43.396148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.753 [2024-11-07 10:50:43.396157] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.753 [2024-11-07 10:50:43.396166] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:15.753 [2024-11-07 10:50:43.406822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:15.753 qpair failed and we were unable to recover it.
00:22:15.753 [2024-11-07 10:50:43.416104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:15.753 [2024-11-07 10:50:43.416142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:15.753 [2024-11-07 10:50:43.416159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:15.753 [2024-11-07 10:50:43.416168] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:15.753 [2024-11-07 10:50:43.416180] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.012 [2024-11-07 10:50:43.426372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.012 qpair failed and we were unable to recover it.
00:22:16.012 [2024-11-07 10:50:43.436231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.012 [2024-11-07 10:50:43.436273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.012 [2024-11-07 10:50:43.436291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.012 [2024-11-07 10:50:43.436300] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.012 [2024-11-07 10:50:43.436309] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.012 [2024-11-07 10:50:43.446588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.012 qpair failed and we were unable to recover it.
00:22:16.012 [2024-11-07 10:50:43.456238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.012 [2024-11-07 10:50:43.456276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.012 [2024-11-07 10:50:43.456293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.012 [2024-11-07 10:50:43.456302] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.012 [2024-11-07 10:50:43.456310] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.012 [2024-11-07 10:50:43.466524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.012 qpair failed and we were unable to recover it.
00:22:16.012 [2024-11-07 10:50:43.476269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.012 [2024-11-07 10:50:43.476311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.012 [2024-11-07 10:50:43.476328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.012 [2024-11-07 10:50:43.476337] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.012 [2024-11-07 10:50:43.476345] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.012 [2024-11-07 10:50:43.486608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.012 qpair failed and we were unable to recover it.
00:22:16.012 [2024-11-07 10:50:43.496262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.012 [2024-11-07 10:50:43.496304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.012 [2024-11-07 10:50:43.496321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.012 [2024-11-07 10:50:43.496330] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.012 [2024-11-07 10:50:43.496339] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.012 [2024-11-07 10:50:43.506745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.012 qpair failed and we were unable to recover it.
00:22:16.012 [2024-11-07 10:50:43.516396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.012 [2024-11-07 10:50:43.516435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.012 [2024-11-07 10:50:43.516453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.012 [2024-11-07 10:50:43.516462] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.012 [2024-11-07 10:50:43.516470] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.012 [2024-11-07 10:50:43.526755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.012 qpair failed and we were unable to recover it.
00:22:16.012 [2024-11-07 10:50:43.536380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.012 [2024-11-07 10:50:43.536421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.012 [2024-11-07 10:50:43.536438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.012 [2024-11-07 10:50:43.536448] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.012 [2024-11-07 10:50:43.536456] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.012 [2024-11-07 10:50:43.546876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.012 qpair failed and we were unable to recover it.
00:22:16.013 [2024-11-07 10:50:43.556526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.013 [2024-11-07 10:50:43.556566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.013 [2024-11-07 10:50:43.556583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.013 [2024-11-07 10:50:43.556592] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.013 [2024-11-07 10:50:43.556601] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.013 [2024-11-07 10:50:43.566724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.013 qpair failed and we were unable to recover it.
00:22:16.013 [2024-11-07 10:50:43.576525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.013 [2024-11-07 10:50:43.576567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.013 [2024-11-07 10:50:43.576584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.013 [2024-11-07 10:50:43.576593] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.013 [2024-11-07 10:50:43.576602] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.013 [2024-11-07 10:50:43.586913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.013 qpair failed and we were unable to recover it.
00:22:16.013 [2024-11-07 10:50:43.596654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.013 [2024-11-07 10:50:43.596705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.013 [2024-11-07 10:50:43.596722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.013 [2024-11-07 10:50:43.596731] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.013 [2024-11-07 10:50:43.596739] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.013 [2024-11-07 10:50:43.607046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.013 qpair failed and we were unable to recover it.
00:22:16.013 [2024-11-07 10:50:43.616614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.013 [2024-11-07 10:50:43.616652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.013 [2024-11-07 10:50:43.616669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.013 [2024-11-07 10:50:43.616679] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.013 [2024-11-07 10:50:43.616687] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.013 [2024-11-07 10:50:43.626999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.013 qpair failed and we were unable to recover it.
00:22:16.013 [2024-11-07 10:50:43.636566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.013 [2024-11-07 10:50:43.636607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.013 [2024-11-07 10:50:43.636624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.013 [2024-11-07 10:50:43.636633] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.013 [2024-11-07 10:50:43.636641] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.013 [2024-11-07 10:50:43.647122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.013 qpair failed and we were unable to recover it.
00:22:16.013 [2024-11-07 10:50:43.656953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.013 [2024-11-07 10:50:43.656994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.013 [2024-11-07 10:50:43.657012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.013 [2024-11-07 10:50:43.657021] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.013 [2024-11-07 10:50:43.657029] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.013 [2024-11-07 10:50:43.667050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.013 qpair failed and we were unable to recover it.
00:22:16.013 [2024-11-07 10:50:43.676854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.013 [2024-11-07 10:50:43.676897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.013 [2024-11-07 10:50:43.676914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.013 [2024-11-07 10:50:43.676929] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.013 [2024-11-07 10:50:43.676938] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.272 [2024-11-07 10:50:43.687339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.272 qpair failed and we were unable to recover it.
00:22:16.272 [2024-11-07 10:50:43.696933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.272 [2024-11-07 10:50:43.696975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.272 [2024-11-07 10:50:43.696992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.272 [2024-11-07 10:50:43.697002] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.272 [2024-11-07 10:50:43.697011] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.272 [2024-11-07 10:50:43.707280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.272 qpair failed and we were unable to recover it.
00:22:16.272 [2024-11-07 10:50:43.716924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.272 [2024-11-07 10:50:43.716960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.272 [2024-11-07 10:50:43.716977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.272 [2024-11-07 10:50:43.716987] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.272 [2024-11-07 10:50:43.716995] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.272 [2024-11-07 10:50:43.727325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.272 qpair failed and we were unable to recover it.
00:22:16.272 [2024-11-07 10:50:43.737086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.272 [2024-11-07 10:50:43.737128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.272 [2024-11-07 10:50:43.737145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.272 [2024-11-07 10:50:43.737155] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.272 [2024-11-07 10:50:43.737163] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.272 [2024-11-07 10:50:43.747277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.272 qpair failed and we were unable to recover it.
00:22:16.272 [2024-11-07 10:50:43.757113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.272 [2024-11-07 10:50:43.757160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.272 [2024-11-07 10:50:43.757177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.272 [2024-11-07 10:50:43.757186] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.272 [2024-11-07 10:50:43.757198] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.272 [2024-11-07 10:50:43.767393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.272 qpair failed and we were unable to recover it.
00:22:16.272 [2024-11-07 10:50:43.777154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.272 [2024-11-07 10:50:43.777195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.272 [2024-11-07 10:50:43.777213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.272 [2024-11-07 10:50:43.777222] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.272 [2024-11-07 10:50:43.777230] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.272 [2024-11-07 10:50:43.787451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.272 qpair failed and we were unable to recover it.
00:22:16.272 [2024-11-07 10:50:43.797226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.272 [2024-11-07 10:50:43.797264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.272 [2024-11-07 10:50:43.797281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.272 [2024-11-07 10:50:43.797290] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.272 [2024-11-07 10:50:43.797299] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.272 [2024-11-07 10:50:43.807525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.272 qpair failed and we were unable to recover it.
00:22:16.272 [2024-11-07 10:50:43.817138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.272 [2024-11-07 10:50:43.817179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.272 [2024-11-07 10:50:43.817197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.272 [2024-11-07 10:50:43.817206] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.273 [2024-11-07 10:50:43.817214] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.273 [2024-11-07 10:50:43.827604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.273 qpair failed and we were unable to recover it.
00:22:16.273 [2024-11-07 10:50:43.837287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.273 [2024-11-07 10:50:43.837333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.273 [2024-11-07 10:50:43.837350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.273 [2024-11-07 10:50:43.837359] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.273 [2024-11-07 10:50:43.837367] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.273 [2024-11-07 10:50:43.847708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.273 qpair failed and we were unable to recover it.
00:22:16.273 [2024-11-07 10:50:43.857348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.273 [2024-11-07 10:50:43.857386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.273 [2024-11-07 10:50:43.857403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.273 [2024-11-07 10:50:43.857412] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.273 [2024-11-07 10:50:43.857421] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.273 [2024-11-07 10:50:43.867813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.273 qpair failed and we were unable to recover it.
00:22:16.273 [2024-11-07 10:50:43.877379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.273 [2024-11-07 10:50:43.877423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.273 [2024-11-07 10:50:43.877440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.273 [2024-11-07 10:50:43.877449] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.273 [2024-11-07 10:50:43.877457] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.273 [2024-11-07 10:50:43.887813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.273 qpair failed and we were unable to recover it.
00:22:16.273 [2024-11-07 10:50:43.897519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.273 [2024-11-07 10:50:43.897561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.273 [2024-11-07 10:50:43.897578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.273 [2024-11-07 10:50:43.897588] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.273 [2024-11-07 10:50:43.897596] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.273 [2024-11-07 10:50:43.907824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.273 qpair failed and we were unable to recover it.
00:22:16.273 [2024-11-07 10:50:43.917623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.273 [2024-11-07 10:50:43.917668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.273 [2024-11-07 10:50:43.917685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.273 [2024-11-07 10:50:43.917694] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.273 [2024-11-07 10:50:43.917702] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.273 [2024-11-07 10:50:43.927917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.273 qpair failed and we were unable to recover it.
00:22:16.273 [2024-11-07 10:50:43.937610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.273 [2024-11-07 10:50:43.937653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.273 [2024-11-07 10:50:43.937674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.273 [2024-11-07 10:50:43.937683] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.273 [2024-11-07 10:50:43.937691] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.532 [2024-11-07 10:50:43.947946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.532 qpair failed and we were unable to recover it.
00:22:16.532 [2024-11-07 10:50:43.957717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.532 [2024-11-07 10:50:43.957756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.532 [2024-11-07 10:50:43.957774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.532 [2024-11-07 10:50:43.957783] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.532 [2024-11-07 10:50:43.957792] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.532 [2024-11-07 10:50:43.968111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.532 qpair failed and we were unable to recover it.
00:22:16.532 [2024-11-07 10:50:43.977873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:16.532 [2024-11-07 10:50:43.977916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:16.532 [2024-11-07 10:50:43.977933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:16.532 [2024-11-07 10:50:43.977942] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:16.532 [2024-11-07 10:50:43.977950] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:16.532 [2024-11-07 10:50:43.988160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.532 qpair failed and we were unable to recover it.
00:22:16.532 [2024-11-07 10:50:43.997840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.532 [2024-11-07 10:50:43.997885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.532 [2024-11-07 10:50:43.997903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.532 [2024-11-07 10:50:43.997912] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.532 [2024-11-07 10:50:43.997920] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.532 [2024-11-07 10:50:44.008182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.532 qpair failed and we were unable to recover it. 00:22:16.532 [2024-11-07 10:50:44.017855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.532 [2024-11-07 10:50:44.017898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.532 [2024-11-07 10:50:44.017914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.532 [2024-11-07 10:50:44.017927] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.532 [2024-11-07 10:50:44.017936] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.532 [2024-11-07 10:50:44.028148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.532 qpair failed and we were unable to recover it. 00:22:16.532 [2024-11-07 10:50:44.038046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.532 [2024-11-07 10:50:44.038089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.532 [2024-11-07 10:50:44.038106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.532 [2024-11-07 10:50:44.038115] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.532 [2024-11-07 10:50:44.038124] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.532 [2024-11-07 10:50:44.048637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.532 qpair failed and we were unable to recover it. 
00:22:16.532 [2024-11-07 10:50:44.057943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.532 [2024-11-07 10:50:44.057985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.532 [2024-11-07 10:50:44.058002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.532 [2024-11-07 10:50:44.058011] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.532 [2024-11-07 10:50:44.058019] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.532 [2024-11-07 10:50:44.068371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.532 qpair failed and we were unable to recover it. 00:22:16.532 [2024-11-07 10:50:44.078027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.532 [2024-11-07 10:50:44.078072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.532 [2024-11-07 10:50:44.078090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.532 [2024-11-07 10:50:44.078099] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.532 [2024-11-07 10:50:44.078107] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.532 [2024-11-07 10:50:44.088297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.532 qpair failed and we were unable to recover it. 00:22:16.532 [2024-11-07 10:50:44.098195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.532 [2024-11-07 10:50:44.098236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.532 [2024-11-07 10:50:44.098253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.532 [2024-11-07 10:50:44.098262] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.532 [2024-11-07 10:50:44.098270] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.532 [2024-11-07 10:50:44.108511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.532 qpair failed and we were unable to recover it. 
00:22:16.532 [2024-11-07 10:50:44.118130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.532 [2024-11-07 10:50:44.118175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.532 [2024-11-07 10:50:44.118192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.532 [2024-11-07 10:50:44.118201] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.532 [2024-11-07 10:50:44.118210] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.532 [2024-11-07 10:50:44.128398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.532 qpair failed and we were unable to recover it. 00:22:16.532 [2024-11-07 10:50:44.138150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.532 [2024-11-07 10:50:44.138193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.532 [2024-11-07 10:50:44.138210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.532 [2024-11-07 10:50:44.138219] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.532 [2024-11-07 10:50:44.138228] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.532 [2024-11-07 10:50:44.148295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.532 qpair failed and we were unable to recover it. 00:22:16.532 [2024-11-07 10:50:44.158386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.532 [2024-11-07 10:50:44.158425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.532 [2024-11-07 10:50:44.158442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.532 [2024-11-07 10:50:44.158451] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.532 [2024-11-07 10:50:44.158460] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.532 [2024-11-07 10:50:44.168463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.532 qpair failed and we were unable to recover it. 
00:22:16.532 [2024-11-07 10:50:44.178255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.532 [2024-11-07 10:50:44.178296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.532 [2024-11-07 10:50:44.178313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.533 [2024-11-07 10:50:44.178322] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.533 [2024-11-07 10:50:44.178331] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.533 [2024-11-07 10:50:44.188465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.533 qpair failed and we were unable to recover it. 00:22:16.533 [2024-11-07 10:50:44.198252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.533 [2024-11-07 10:50:44.198290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.533 [2024-11-07 10:50:44.198308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.533 [2024-11-07 10:50:44.198318] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.533 [2024-11-07 10:50:44.198326] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.793 [2024-11-07 10:50:44.208616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.793 qpair failed and we were unable to recover it. 00:22:16.793 [2024-11-07 10:50:44.218433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.793 [2024-11-07 10:50:44.218476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.793 [2024-11-07 10:50:44.218494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.793 [2024-11-07 10:50:44.218503] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.793 [2024-11-07 10:50:44.218527] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.793 [2024-11-07 10:50:44.228638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.793 qpair failed and we were unable to recover it. 
00:22:16.793 [2024-11-07 10:50:44.238372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.793 [2024-11-07 10:50:44.238417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.793 [2024-11-07 10:50:44.238434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.793 [2024-11-07 10:50:44.238444] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.793 [2024-11-07 10:50:44.238452] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.793 [2024-11-07 10:50:44.248597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.793 qpair failed and we were unable to recover it. 00:22:16.793 [2024-11-07 10:50:44.258443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.793 [2024-11-07 10:50:44.258481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.793 [2024-11-07 10:50:44.258498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.793 [2024-11-07 10:50:44.258512] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.793 [2024-11-07 10:50:44.258521] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.793 [2024-11-07 10:50:44.268881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.793 qpair failed and we were unable to recover it. 00:22:16.793 [2024-11-07 10:50:44.278683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.793 [2024-11-07 10:50:44.278724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.793 [2024-11-07 10:50:44.278745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.794 [2024-11-07 10:50:44.278754] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.794 [2024-11-07 10:50:44.278762] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.794 [2024-11-07 10:50:44.288864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.794 qpair failed and we were unable to recover it. 
00:22:16.794 [2024-11-07 10:50:44.298598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.794 [2024-11-07 10:50:44.298643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.794 [2024-11-07 10:50:44.298660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.794 [2024-11-07 10:50:44.298669] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.794 [2024-11-07 10:50:44.298678] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.794 [2024-11-07 10:50:44.308871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.794 qpair failed and we were unable to recover it. 00:22:16.794 [2024-11-07 10:50:44.318724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.794 [2024-11-07 10:50:44.318764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.794 [2024-11-07 10:50:44.318781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.794 [2024-11-07 10:50:44.318790] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.794 [2024-11-07 10:50:44.318799] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.794 [2024-11-07 10:50:44.328958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.794 qpair failed and we were unable to recover it. 00:22:16.794 [2024-11-07 10:50:44.338695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.794 [2024-11-07 10:50:44.338735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.794 [2024-11-07 10:50:44.338752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.794 [2024-11-07 10:50:44.338761] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.794 [2024-11-07 10:50:44.338770] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.794 [2024-11-07 10:50:44.349149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.794 qpair failed and we were unable to recover it. 
00:22:16.794 [2024-11-07 10:50:44.358869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.794 [2024-11-07 10:50:44.358908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.794 [2024-11-07 10:50:44.358925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.794 [2024-11-07 10:50:44.358938] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.794 [2024-11-07 10:50:44.358947] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.794 [2024-11-07 10:50:44.369218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.794 qpair failed and we were unable to recover it. 00:22:16.794 [2024-11-07 10:50:44.378889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.794 [2024-11-07 10:50:44.378931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.794 [2024-11-07 10:50:44.378948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.794 [2024-11-07 10:50:44.378957] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.794 [2024-11-07 10:50:44.378966] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.794 [2024-11-07 10:50:44.389110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.794 qpair failed and we were unable to recover it. 00:22:16.794 [2024-11-07 10:50:44.398898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.794 [2024-11-07 10:50:44.398940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.794 [2024-11-07 10:50:44.398957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.794 [2024-11-07 10:50:44.398966] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.794 [2024-11-07 10:50:44.398975] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.794 [2024-11-07 10:50:44.409275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.794 qpair failed and we were unable to recover it. 
00:22:16.794 [2024-11-07 10:50:44.419004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.794 [2024-11-07 10:50:44.419045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.794 [2024-11-07 10:50:44.419063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.794 [2024-11-07 10:50:44.419072] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.794 [2024-11-07 10:50:44.419080] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.794 [2024-11-07 10:50:44.429236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.794 qpair failed and we were unable to recover it. 00:22:16.794 [2024-11-07 10:50:44.439082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.794 [2024-11-07 10:50:44.439126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.794 [2024-11-07 10:50:44.439143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.794 [2024-11-07 10:50:44.439152] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.794 [2024-11-07 10:50:44.439161] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:16.794 [2024-11-07 10:50:44.449327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.794 qpair failed and we were unable to recover it. 00:22:16.794 [2024-11-07 10:50:44.458997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:16.794 [2024-11-07 10:50:44.459037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:16.794 [2024-11-07 10:50:44.459054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:16.794 [2024-11-07 10:50:44.459064] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:16.794 [2024-11-07 10:50:44.459072] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.053 [2024-11-07 10:50:44.469370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.053 qpair failed and we were unable to recover it. 
00:22:17.053 [2024-11-07 10:50:44.479123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.053 [2024-11-07 10:50:44.479164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.053 [2024-11-07 10:50:44.479181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.053 [2024-11-07 10:50:44.479190] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.053 [2024-11-07 10:50:44.479199] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.053 [2024-11-07 10:50:44.489541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.053 qpair failed and we were unable to recover it. 00:22:17.053 [2024-11-07 10:50:44.499136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.053 [2024-11-07 10:50:44.499180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.053 [2024-11-07 10:50:44.499197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.053 [2024-11-07 10:50:44.499206] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.053 [2024-11-07 10:50:44.499215] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.053 [2024-11-07 10:50:44.509557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.053 qpair failed and we were unable to recover it. 00:22:17.053 [2024-11-07 10:50:44.519230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.053 [2024-11-07 10:50:44.519274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.053 [2024-11-07 10:50:44.519292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.053 [2024-11-07 10:50:44.519301] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.053 [2024-11-07 10:50:44.519309] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.053 [2024-11-07 10:50:44.529676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.053 qpair failed and we were unable to recover it. 
00:22:17.053 [2024-11-07 10:50:44.539284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.053 [2024-11-07 10:50:44.539323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.053 [2024-11-07 10:50:44.539341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.053 [2024-11-07 10:50:44.539350] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.053 [2024-11-07 10:50:44.539358] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.053 [2024-11-07 10:50:44.549715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.053 qpair failed and we were unable to recover it. 00:22:17.053 [2024-11-07 10:50:44.559472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.053 [2024-11-07 10:50:44.559520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.053 [2024-11-07 10:50:44.559537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.053 [2024-11-07 10:50:44.559546] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.053 [2024-11-07 10:50:44.559554] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.053 [2024-11-07 10:50:44.569761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.053 qpair failed and we were unable to recover it. 00:22:17.053 [2024-11-07 10:50:44.579485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.053 [2024-11-07 10:50:44.579533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.053 [2024-11-07 10:50:44.579550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.053 [2024-11-07 10:50:44.579560] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.053 [2024-11-07 10:50:44.579568] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.053 [2024-11-07 10:50:44.589777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.053 qpair failed and we were unable to recover it. 
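The recurring "qpair failed and we were unable to recover it." line is the test's summary of the same host-side detection: once spdk_nvme_qpair_process_completions() returns a negative errno the qpair is dead, and the host can either reconnect it or give up. A hedged sketch of a bounded recovery loop follows, assuming the public spdk_nvme_ctrlr_reconnect_io_qpair() helper; the retry count and back-off delay are illustrative choices, not values taken from this test.

/* Sketch: bounded recovery for a qpair whose completion poll failed,
 * mirroring the "CQ transport error ... unable to recover" pattern. */
#include "spdk/nvme.h"
#include "spdk/env.h"

#define MAX_RETRIES 3    /* illustrative */

static int poll_with_recovery(struct spdk_nvme_qpair *qpair)
{
	int32_t rc;
	int attempt;

	for (attempt = 0; attempt <= MAX_RETRIES; attempt++) {
		rc = spdk_nvme_qpair_process_completions(qpair, 0);
		if (rc >= 0) {
			return 0;    /* healthy: rc completions were reaped */
		}
		/* rc < 0 (e.g. -ENXIO, the -6 in the log): transport-level
		 * failure; try to re-establish the qpair before giving up. */
		if (spdk_nvme_ctrlr_reconnect_io_qpair(qpair) != 0) {
			spdk_delay_us(100 * 1000);    /* back off 100 ms */
		}
	}
	return -1;    /* unable to recover it */
}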
00:22:17.053 [2024-11-07 10:50:44.599547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.053 [2024-11-07 10:50:44.599587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.053 [2024-11-07 10:50:44.599604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.053 [2024-11-07 10:50:44.599613] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.053 [2024-11-07 10:50:44.599621] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.053 [2024-11-07 10:50:44.609810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.053 qpair failed and we were unable to recover it. 00:22:17.053 [2024-11-07 10:50:44.619548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.053 [2024-11-07 10:50:44.619591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.053 [2024-11-07 10:50:44.619611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.053 [2024-11-07 10:50:44.619620] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.054 [2024-11-07 10:50:44.619628] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.054 [2024-11-07 10:50:44.629838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.054 qpair failed and we were unable to recover it. 00:22:17.054 [2024-11-07 10:50:44.639587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.054 [2024-11-07 10:50:44.639630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.054 [2024-11-07 10:50:44.639646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.054 [2024-11-07 10:50:44.639655] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.054 [2024-11-07 10:50:44.639664] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.054 [2024-11-07 10:50:44.650016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.054 qpair failed and we were unable to recover it. 
00:22:17.054 [2024-11-07 10:50:44.659810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.054 [2024-11-07 10:50:44.659849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.054 [2024-11-07 10:50:44.659866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.054 [2024-11-07 10:50:44.659875] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.054 [2024-11-07 10:50:44.659884] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.054 [2024-11-07 10:50:44.669856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.054 qpair failed and we were unable to recover it. 00:22:17.054 [2024-11-07 10:50:44.679698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.054 [2024-11-07 10:50:44.679743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.054 [2024-11-07 10:50:44.679760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.054 [2024-11-07 10:50:44.679769] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.054 [2024-11-07 10:50:44.679777] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.054 [2024-11-07 10:50:44.690464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.054 qpair failed and we were unable to recover it. 00:22:17.054 [2024-11-07 10:50:44.699890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.054 [2024-11-07 10:50:44.699933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.054 [2024-11-07 10:50:44.699949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.054 [2024-11-07 10:50:44.699959] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.054 [2024-11-07 10:50:44.699971] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.054 [2024-11-07 10:50:44.710103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.054 qpair failed and we were unable to recover it. 
00:22:17.054 [2024-11-07 10:50:44.719846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.054 [2024-11-07 10:50:44.719882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.054 [2024-11-07 10:50:44.719901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.054 [2024-11-07 10:50:44.719911] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.054 [2024-11-07 10:50:44.719920] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.312 [2024-11-07 10:50:44.730223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.312 qpair failed and we were unable to recover it. 00:22:17.312 [2024-11-07 10:50:44.740039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.313 [2024-11-07 10:50:44.740080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.313 [2024-11-07 10:50:44.740097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.313 [2024-11-07 10:50:44.740106] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.313 [2024-11-07 10:50:44.740114] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.313 [2024-11-07 10:50:44.750320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.313 qpair failed and we were unable to recover it. 00:22:17.313 [2024-11-07 10:50:44.760045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.313 [2024-11-07 10:50:44.760085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.313 [2024-11-07 10:50:44.760103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.313 [2024-11-07 10:50:44.760112] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.313 [2024-11-07 10:50:44.760120] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.313 [2024-11-07 10:50:44.770385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.313 qpair failed and we were unable to recover it. 
00:22:17.313 [2024-11-07 10:50:44.780048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.313 [2024-11-07 10:50:44.780089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.313 [2024-11-07 10:50:44.780106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.313 [2024-11-07 10:50:44.780115] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.313 [2024-11-07 10:50:44.780124] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.313 [2024-11-07 10:50:44.790382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.313 qpair failed and we were unable to recover it. 00:22:17.313 [2024-11-07 10:50:44.800180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.313 [2024-11-07 10:50:44.800220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.313 [2024-11-07 10:50:44.800237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.313 [2024-11-07 10:50:44.800246] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.313 [2024-11-07 10:50:44.800254] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.313 [2024-11-07 10:50:44.810529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.313 qpair failed and we were unable to recover it. 00:22:17.313 [2024-11-07 10:50:44.820174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.313 [2024-11-07 10:50:44.820216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.313 [2024-11-07 10:50:44.820233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.313 [2024-11-07 10:50:44.820243] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.313 [2024-11-07 10:50:44.820251] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.313 [2024-11-07 10:50:44.830543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.313 qpair failed and we were unable to recover it. 
00:22:17.313 [2024-11-07 10:50:44.840247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.313 [2024-11-07 10:50:44.840286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.313 [2024-11-07 10:50:44.840303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.313 [2024-11-07 10:50:44.840312] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.313 [2024-11-07 10:50:44.840321] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.313 [2024-11-07 10:50:44.850567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.313 qpair failed and we were unable to recover it. 00:22:17.313 [2024-11-07 10:50:44.860425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.313 [2024-11-07 10:50:44.860465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.313 [2024-11-07 10:50:44.860482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.313 [2024-11-07 10:50:44.860491] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.313 [2024-11-07 10:50:44.860499] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.313 [2024-11-07 10:50:44.870649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.313 qpair failed and we were unable to recover it. 00:22:17.313 [2024-11-07 10:50:44.880407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.313 [2024-11-07 10:50:44.880452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.313 [2024-11-07 10:50:44.880468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.313 [2024-11-07 10:50:44.880478] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.313 [2024-11-07 10:50:44.880486] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.313 [2024-11-07 10:50:44.890559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.313 qpair failed and we were unable to recover it. 
00:22:17.313 [2024-11-07 10:50:44.900529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.313 [2024-11-07 10:50:44.900573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.313 [2024-11-07 10:50:44.900589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.313 [2024-11-07 10:50:44.900599] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.313 [2024-11-07 10:50:44.900607] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.313 [2024-11-07 10:50:44.910541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.313 qpair failed and we were unable to recover it. 00:22:17.313 [2024-11-07 10:50:44.920397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.313 [2024-11-07 10:50:44.920438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.313 [2024-11-07 10:50:44.920455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.313 [2024-11-07 10:50:44.920465] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.313 [2024-11-07 10:50:44.920473] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.313 [2024-11-07 10:50:44.930676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.313 qpair failed and we were unable to recover it. 00:22:17.313 [2024-11-07 10:50:44.940588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.313 [2024-11-07 10:50:44.940629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.313 [2024-11-07 10:50:44.940646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.313 [2024-11-07 10:50:44.940655] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.313 [2024-11-07 10:50:44.940663] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.313 [2024-11-07 10:50:44.950735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.313 qpair failed and we were unable to recover it. 
00:22:17.313 [2024-11-07 10:50:44.960655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.313 [2024-11-07 10:50:44.960697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.313 [2024-11-07 10:50:44.960718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.313 [2024-11-07 10:50:44.960727] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.313 [2024-11-07 10:50:44.960736] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.313 [2024-11-07 10:50:44.970792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.313 qpair failed and we were unable to recover it. 00:22:17.313 [2024-11-07 10:50:44.980623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.314 [2024-11-07 10:50:44.980666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.314 [2024-11-07 10:50:44.980684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.314 [2024-11-07 10:50:44.980693] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.314 [2024-11-07 10:50:44.980702] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.572 [2024-11-07 10:50:44.990990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.572 qpair failed and we were unable to recover it. 00:22:17.572 [2024-11-07 10:50:45.000757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.572 [2024-11-07 10:50:45.000799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.572 [2024-11-07 10:50:45.000816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.572 [2024-11-07 10:50:45.000826] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.572 [2024-11-07 10:50:45.000835] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.572 [2024-11-07 10:50:45.011128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.572 qpair failed and we were unable to recover it. 
00:22:17.572 [2024-11-07 10:50:45.020763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.572 [2024-11-07 10:50:45.020804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.572 [2024-11-07 10:50:45.020821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.572 [2024-11-07 10:50:45.020830] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.572 [2024-11-07 10:50:45.020838] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.572 [2024-11-07 10:50:45.031082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.572 qpair failed and we were unable to recover it. 00:22:17.572 [2024-11-07 10:50:45.040863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.572 [2024-11-07 10:50:45.040903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.572 [2024-11-07 10:50:45.040920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.572 [2024-11-07 10:50:45.040929] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.572 [2024-11-07 10:50:45.040944] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.572 [2024-11-07 10:50:45.051237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.572 qpair failed and we were unable to recover it. 00:22:17.572 [2024-11-07 10:50:45.061050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.572 [2024-11-07 10:50:45.061091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.572 [2024-11-07 10:50:45.061109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.572 [2024-11-07 10:50:45.061118] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.572 [2024-11-07 10:50:45.061127] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.572 [2024-11-07 10:50:45.071307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.572 qpair failed and we were unable to recover it. 
00:22:17.572 [2024-11-07 10:50:45.081117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.572 [2024-11-07 10:50:45.081158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.572 [2024-11-07 10:50:45.081175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.572 [2024-11-07 10:50:45.081184] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.572 [2024-11-07 10:50:45.081193] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.572 [2024-11-07 10:50:45.091355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.572 qpair failed and we were unable to recover it. 00:22:17.573 [2024-11-07 10:50:45.101249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.573 [2024-11-07 10:50:45.101290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.573 [2024-11-07 10:50:45.101307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.573 [2024-11-07 10:50:45.101316] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.573 [2024-11-07 10:50:45.101324] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.573 [2024-11-07 10:50:45.111336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.573 qpair failed and we were unable to recover it. 00:22:17.573 [2024-11-07 10:50:45.121248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.573 [2024-11-07 10:50:45.121288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.573 [2024-11-07 10:50:45.121305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.573 [2024-11-07 10:50:45.121314] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.573 [2024-11-07 10:50:45.121322] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.573 [2024-11-07 10:50:45.131387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.573 qpair failed and we were unable to recover it. 
00:22:17.573 [2024-11-07 10:50:45.141270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:17.573 [2024-11-07 10:50:45.141307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:17.573 [2024-11-07 10:50:45.141324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:17.573 [2024-11-07 10:50:45.141333] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:17.573 [2024-11-07 10:50:45.141341] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:17.573 [2024-11-07 10:50:45.151597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:17.573 qpair failed and we were unable to recover it.
00:22:17.573 [2024-11-07 10:50:45.161274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:17.573 [2024-11-07 10:50:45.161313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:17.573 [2024-11-07 10:50:45.161330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:17.573 [2024-11-07 10:50:45.161339] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:17.573 [2024-11-07 10:50:45.161347] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:17.573 [2024-11-07 10:50:45.171633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:17.573 qpair failed and we were unable to recover it.
00:22:17.573 [2024-11-07 10:50:45.181315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:17.573 [2024-11-07 10:50:45.181355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:17.573 [2024-11-07 10:50:45.181372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:17.573 [2024-11-07 10:50:45.181381] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:17.573 [2024-11-07 10:50:45.181390] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:17.573 [2024-11-07 10:50:45.191632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:17.573 qpair failed and we were unable to recover it.
00:22:17.573 [2024-11-07 10:50:45.201356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:17.573 [2024-11-07 10:50:45.201396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:17.573 [2024-11-07 10:50:45.201413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:17.573 [2024-11-07 10:50:45.201422] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:17.573 [2024-11-07 10:50:45.201430] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:17.573 [2024-11-07 10:50:45.211807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:17.573 qpair failed and we were unable to recover it.
00:22:17.573 [2024-11-07 10:50:45.221452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:17.573 [2024-11-07 10:50:45.221496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:17.573 [2024-11-07 10:50:45.221518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:17.573 [2024-11-07 10:50:45.221527] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:17.573 [2024-11-07 10:50:45.221535] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:17.573 [2024-11-07 10:50:45.231861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:17.573 qpair failed and we were unable to recover it.
00:22:17.573 [2024-11-07 10:50:45.241568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:17.573 [2024-11-07 10:50:45.241606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:17.573 [2024-11-07 10:50:45.241625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:17.573 [2024-11-07 10:50:45.241634] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:17.573 [2024-11-07 10:50:45.241642] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:17.832 [2024-11-07 10:50:45.251893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:17.832 qpair failed and we were unable to recover it.
00:22:17.832 [2024-11-07 10:50:45.261538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:17.832 [2024-11-07 10:50:45.261579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:17.832 [2024-11-07 10:50:45.261596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:17.832 [2024-11-07 10:50:45.261605] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:17.832 [2024-11-07 10:50:45.261614] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:17.832 [2024-11-07 10:50:45.271938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:17.832 qpair failed and we were unable to recover it.
00:22:17.832 [2024-11-07 10:50:45.281587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:17.832 [2024-11-07 10:50:45.281628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:17.832 [2024-11-07 10:50:45.281645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:17.832 [2024-11-07 10:50:45.281654] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:17.832 [2024-11-07 10:50:45.281662] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:17.832 [2024-11-07 10:50:45.291944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:17.832 qpair failed and we were unable to recover it.
00:22:17.832 [2024-11-07 10:50:45.301762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:17.832 [2024-11-07 10:50:45.301805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:17.832 [2024-11-07 10:50:45.301825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:17.832 [2024-11-07 10:50:45.301834] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:17.832 [2024-11-07 10:50:45.301843] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:17.832 [2024-11-07 10:50:45.312126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:17.832 qpair failed and we were unable to recover it.
00:22:17.833 [2024-11-07 10:50:45.321751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.833 [2024-11-07 10:50:45.321788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.833 [2024-11-07 10:50:45.321804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.833 [2024-11-07 10:50:45.321814] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.833 [2024-11-07 10:50:45.321822] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.833 [2024-11-07 10:50:45.332547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.833 qpair failed and we were unable to recover it. 00:22:17.833 [2024-11-07 10:50:45.341844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.833 [2024-11-07 10:50:45.341882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.833 [2024-11-07 10:50:45.341900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.833 [2024-11-07 10:50:45.341909] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.833 [2024-11-07 10:50:45.341917] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.833 [2024-11-07 10:50:45.352143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.833 qpair failed and we were unable to recover it. 00:22:17.833 [2024-11-07 10:50:45.361880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:17.833 [2024-11-07 10:50:45.361922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:17.833 [2024-11-07 10:50:45.361939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:17.833 [2024-11-07 10:50:45.361948] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:17.833 [2024-11-07 10:50:45.361956] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:17.833 [2024-11-07 10:50:45.372383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.833 qpair failed and we were unable to recover it. 
00:22:17.833 [2024-11-07 10:50:45.381960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:17.833 [2024-11-07 10:50:45.381999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:17.833 [2024-11-07 10:50:45.382017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:17.833 [2024-11-07 10:50:45.382026] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:17.833 [2024-11-07 10:50:45.382038] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:17.833 [2024-11-07 10:50:45.392218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:17.833 qpair failed and we were unable to recover it.
00:22:17.833 [2024-11-07 10:50:45.401976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:17.833 [2024-11-07 10:50:45.402017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:17.833 [2024-11-07 10:50:45.402035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:17.833 [2024-11-07 10:50:45.402044] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:17.833 [2024-11-07 10:50:45.402052] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:17.833 [2024-11-07 10:50:45.412416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:17.833 qpair failed and we were unable to recover it.
00:22:17.833 [2024-11-07 10:50:45.421985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:17.833 [2024-11-07 10:50:45.422028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:17.833 [2024-11-07 10:50:45.422045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:17.833 [2024-11-07 10:50:45.422054] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:17.833 [2024-11-07 10:50:45.422062] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:17.833 [2024-11-07 10:50:45.432410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:17.833 qpair failed and we were unable to recover it.
00:22:17.833 [2024-11-07 10:50:45.442099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:17.833 [2024-11-07 10:50:45.442144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:17.833 [2024-11-07 10:50:45.442162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:17.833 [2024-11-07 10:50:45.442171] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:17.833 [2024-11-07 10:50:45.442179] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:17.833 [2024-11-07 10:50:45.452413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:17.833 qpair failed and we were unable to recover it.
00:22:17.833 [2024-11-07 10:50:45.462206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:17.833 [2024-11-07 10:50:45.462244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:17.833 [2024-11-07 10:50:45.462261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:17.833 [2024-11-07 10:50:45.462271] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:17.833 [2024-11-07 10:50:45.462279] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:17.833 [2024-11-07 10:50:45.472529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:17.833 qpair failed and we were unable to recover it.
00:22:17.833 [2024-11-07 10:50:45.482246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:17.833 [2024-11-07 10:50:45.482283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:17.833 [2024-11-07 10:50:45.482301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:17.833 [2024-11-07 10:50:45.482310] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:17.833 [2024-11-07 10:50:45.482318] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:17.833 [2024-11-07 10:50:45.492570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:17.833 qpair failed and we were unable to recover it.
00:22:17.833 [2024-11-07 10:50:45.502310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:17.833 [2024-11-07 10:50:45.502355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:17.833 [2024-11-07 10:50:45.502377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:17.833 [2024-11-07 10:50:45.502388] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:17.833 [2024-11-07 10:50:45.502399] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.092 [2024-11-07 10:50:45.512797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.092 qpair failed and we were unable to recover it.
00:22:18.092 [2024-11-07 10:50:45.522306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.092 [2024-11-07 10:50:45.522352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.092 [2024-11-07 10:50:45.522372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.092 [2024-11-07 10:50:45.522381] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.092 [2024-11-07 10:50:45.522390] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.092 [2024-11-07 10:50:45.532757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.092 qpair failed and we were unable to recover it.
00:22:18.092 [2024-11-07 10:50:45.542323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.092 [2024-11-07 10:50:45.542367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.092 [2024-11-07 10:50:45.542385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.092 [2024-11-07 10:50:45.542394] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.092 [2024-11-07 10:50:45.542403] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.092 [2024-11-07 10:50:45.552942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.092 qpair failed and we were unable to recover it.
00:22:18.092 [2024-11-07 10:50:45.562568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.092 [2024-11-07 10:50:45.562612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.092 [2024-11-07 10:50:45.562633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.092 [2024-11-07 10:50:45.562642] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.092 [2024-11-07 10:50:45.562651] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.092 [2024-11-07 10:50:45.572852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.092 qpair failed and we were unable to recover it.
00:22:18.092 [2024-11-07 10:50:45.582554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.092 [2024-11-07 10:50:45.582595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.092 [2024-11-07 10:50:45.582613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.092 [2024-11-07 10:50:45.582622] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.092 [2024-11-07 10:50:45.582630] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.092 [2024-11-07 10:50:45.592905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.092 qpair failed and we were unable to recover it.
00:22:18.092 [2024-11-07 10:50:45.602714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.092 [2024-11-07 10:50:45.602755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.092 [2024-11-07 10:50:45.602772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.092 [2024-11-07 10:50:45.602782] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.092 [2024-11-07 10:50:45.602790] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.092 [2024-11-07 10:50:45.612957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.092 qpair failed and we were unable to recover it.
00:22:18.092 [2024-11-07 10:50:45.622792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.093 [2024-11-07 10:50:45.622836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.093 [2024-11-07 10:50:45.622853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.093 [2024-11-07 10:50:45.622862] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.093 [2024-11-07 10:50:45.622871] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.093 [2024-11-07 10:50:45.632987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.093 qpair failed and we were unable to recover it.
00:22:18.093 [2024-11-07 10:50:45.642796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.093 [2024-11-07 10:50:45.642839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.093 [2024-11-07 10:50:45.642856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.093 [2024-11-07 10:50:45.642869] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.093 [2024-11-07 10:50:45.642877] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.093 [2024-11-07 10:50:45.653102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.093 qpair failed and we were unable to recover it.
00:22:18.093 [2024-11-07 10:50:45.662877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.093 [2024-11-07 10:50:45.662919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.093 [2024-11-07 10:50:45.662936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.093 [2024-11-07 10:50:45.662945] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.093 [2024-11-07 10:50:45.662954] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.093 [2024-11-07 10:50:45.673067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.093 qpair failed and we were unable to recover it.
00:22:18.093 [2024-11-07 10:50:45.682885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.093 [2024-11-07 10:50:45.682932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.093 [2024-11-07 10:50:45.682950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.093 [2024-11-07 10:50:45.682959] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.093 [2024-11-07 10:50:45.682968] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.093 [2024-11-07 10:50:45.693212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.093 qpair failed and we were unable to recover it.
00:22:18.093 [2024-11-07 10:50:45.703008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.093 [2024-11-07 10:50:45.703052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.093 [2024-11-07 10:50:45.703069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.093 [2024-11-07 10:50:45.703078] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.093 [2024-11-07 10:50:45.703086] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.093 [2024-11-07 10:50:45.713029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.093 qpair failed and we were unable to recover it.
00:22:18.093 [2024-11-07 10:50:45.722991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.093 [2024-11-07 10:50:45.723029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.093 [2024-11-07 10:50:45.723046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.093 [2024-11-07 10:50:45.723056] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.093 [2024-11-07 10:50:45.723064] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.093 [2024-11-07 10:50:45.733260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.093 qpair failed and we were unable to recover it.
00:22:18.093 [2024-11-07 10:50:45.743028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.093 [2024-11-07 10:50:45.743068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.093 [2024-11-07 10:50:45.743086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.093 [2024-11-07 10:50:45.743095] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.093 [2024-11-07 10:50:45.743103] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.093 [2024-11-07 10:50:45.753377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.093 qpair failed and we were unable to recover it.
00:22:18.351 [2024-11-07 10:50:45.763178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.351 [2024-11-07 10:50:45.763222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.351 [2024-11-07 10:50:45.763242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.351 [2024-11-07 10:50:45.763254] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.352 [2024-11-07 10:50:45.763265] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.352 [2024-11-07 10:50:45.773389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.352 qpair failed and we were unable to recover it.
00:22:18.352 [2024-11-07 10:50:45.783041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.352 [2024-11-07 10:50:45.783087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.352 [2024-11-07 10:50:45.783105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.352 [2024-11-07 10:50:45.783114] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.352 [2024-11-07 10:50:45.783122] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.352 [2024-11-07 10:50:45.793344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.352 qpair failed and we were unable to recover it.
00:22:18.352 [2024-11-07 10:50:45.803205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.352 [2024-11-07 10:50:45.803244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.352 [2024-11-07 10:50:45.803261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.352 [2024-11-07 10:50:45.803270] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.352 [2024-11-07 10:50:45.803278] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.352 [2024-11-07 10:50:45.813616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.352 qpair failed and we were unable to recover it.
00:22:18.352 [2024-11-07 10:50:45.823324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.352 [2024-11-07 10:50:45.823366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.352 [2024-11-07 10:50:45.823383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.352 [2024-11-07 10:50:45.823392] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.352 [2024-11-07 10:50:45.823400] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.352 [2024-11-07 10:50:45.833569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.352 qpair failed and we were unable to recover it.
00:22:18.352 [2024-11-07 10:50:45.843392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.352 [2024-11-07 10:50:45.843431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.352 [2024-11-07 10:50:45.843448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.352 [2024-11-07 10:50:45.843457] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.352 [2024-11-07 10:50:45.843465] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.352 [2024-11-07 10:50:45.853481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.352 qpair failed and we were unable to recover it.
00:22:18.352 [2024-11-07 10:50:45.863372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.352 [2024-11-07 10:50:45.863411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.352 [2024-11-07 10:50:45.863428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.352 [2024-11-07 10:50:45.863438] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.352 [2024-11-07 10:50:45.863446] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.352 [2024-11-07 10:50:45.873748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.352 qpair failed and we were unable to recover it.
00:22:18.352 [2024-11-07 10:50:45.883408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.352 [2024-11-07 10:50:45.883447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.352 [2024-11-07 10:50:45.883464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.352 [2024-11-07 10:50:45.883473] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.352 [2024-11-07 10:50:45.883482] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.352 [2024-11-07 10:50:45.893726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.352 qpair failed and we were unable to recover it.
00:22:18.352 [2024-11-07 10:50:45.903477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.352 [2024-11-07 10:50:45.903529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.352 [2024-11-07 10:50:45.903549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.352 [2024-11-07 10:50:45.903559] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.352 [2024-11-07 10:50:45.903567] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.352 [2024-11-07 10:50:45.913869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.352 qpair failed and we were unable to recover it.
00:22:18.352 [2024-11-07 10:50:45.923578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.352 [2024-11-07 10:50:45.923620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.352 [2024-11-07 10:50:45.923638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.352 [2024-11-07 10:50:45.923647] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.352 [2024-11-07 10:50:45.923655] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.352 [2024-11-07 10:50:45.933899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.352 qpair failed and we were unable to recover it.
00:22:18.352 [2024-11-07 10:50:45.943673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.352 [2024-11-07 10:50:45.943720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.352 [2024-11-07 10:50:45.943737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.352 [2024-11-07 10:50:45.943746] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.352 [2024-11-07 10:50:45.943755] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.352 [2024-11-07 10:50:45.954026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.352 qpair failed and we were unable to recover it.
00:22:18.352 [2024-11-07 10:50:45.963606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.352 [2024-11-07 10:50:45.963642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.352 [2024-11-07 10:50:45.963659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.352 [2024-11-07 10:50:45.963668] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.352 [2024-11-07 10:50:45.963676] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.352 [2024-11-07 10:50:45.974489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.352 qpair failed and we were unable to recover it.
00:22:18.352 [2024-11-07 10:50:45.983849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.352 [2024-11-07 10:50:45.983891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.352 [2024-11-07 10:50:45.983908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.352 [2024-11-07 10:50:45.983920] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.352 [2024-11-07 10:50:45.983929] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.352 [2024-11-07 10:50:45.994009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.352 qpair failed and we were unable to recover it.
00:22:18.352 [2024-11-07 10:50:46.003725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.352 [2024-11-07 10:50:46.003769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.352 [2024-11-07 10:50:46.003786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.352 [2024-11-07 10:50:46.003795] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.352 [2024-11-07 10:50:46.003804] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.352 [2024-11-07 10:50:46.014188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.352 qpair failed and we were unable to recover it.
00:22:18.611 [2024-11-07 10:50:46.023812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.611 [2024-11-07 10:50:46.023855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.611 [2024-11-07 10:50:46.023874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.611 [2024-11-07 10:50:46.023883] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.611 [2024-11-07 10:50:46.023892] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.611 [2024-11-07 10:50:46.034269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.611 qpair failed and we were unable to recover it.
00:22:18.611 [2024-11-07 10:50:46.043963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.611 [2024-11-07 10:50:46.044003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.611 [2024-11-07 10:50:46.044020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.611 [2024-11-07 10:50:46.044030] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.611 [2024-11-07 10:50:46.044038] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.612 [2024-11-07 10:50:46.054315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.612 qpair failed and we were unable to recover it.
00:22:18.612 [2024-11-07 10:50:46.064022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.612 [2024-11-07 10:50:46.064063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.612 [2024-11-07 10:50:46.064080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.612 [2024-11-07 10:50:46.064089] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.612 [2024-11-07 10:50:46.064098] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.612 [2024-11-07 10:50:46.074227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.612 qpair failed and we were unable to recover it.
00:22:18.612 [2024-11-07 10:50:46.084116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.612 [2024-11-07 10:50:46.084161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.612 [2024-11-07 10:50:46.084178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.612 [2024-11-07 10:50:46.084187] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.612 [2024-11-07 10:50:46.084195] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.612 [2024-11-07 10:50:46.094448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.612 qpair failed and we were unable to recover it.
00:22:18.612 [2024-11-07 10:50:46.104119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.612 [2024-11-07 10:50:46.104161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.612 [2024-11-07 10:50:46.104177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.612 [2024-11-07 10:50:46.104187] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.612 [2024-11-07 10:50:46.104195] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.612 [2024-11-07 10:50:46.114375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.612 qpair failed and we were unable to recover it.
00:22:18.612 [2024-11-07 10:50:46.124102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.612 [2024-11-07 10:50:46.124139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.612 [2024-11-07 10:50:46.124156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.612 [2024-11-07 10:50:46.124165] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.612 [2024-11-07 10:50:46.124174] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.612 [2024-11-07 10:50:46.134396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.612 qpair failed and we were unable to recover it.
00:22:18.612 [2024-11-07 10:50:46.144195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.612 [2024-11-07 10:50:46.144235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.612 [2024-11-07 10:50:46.144252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.612 [2024-11-07 10:50:46.144261] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.612 [2024-11-07 10:50:46.144269] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.612 [2024-11-07 10:50:46.154634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.612 qpair failed and we were unable to recover it.
00:22:18.612 [2024-11-07 10:50:46.164202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.612 [2024-11-07 10:50:46.164248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.612 [2024-11-07 10:50:46.164265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.612 [2024-11-07 10:50:46.164274] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.612 [2024-11-07 10:50:46.164283] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.612 [2024-11-07 10:50:46.174518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.612 qpair failed and we were unable to recover it.
00:22:18.612 [2024-11-07 10:50:46.184285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.612 [2024-11-07 10:50:46.184329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.612 [2024-11-07 10:50:46.184345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.612 [2024-11-07 10:50:46.184354] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.612 [2024-11-07 10:50:46.184363] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.612 [2024-11-07 10:50:46.194615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.612 qpair failed and we were unable to recover it.
00:22:18.612 [2024-11-07 10:50:46.204329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.612 [2024-11-07 10:50:46.204366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.612 [2024-11-07 10:50:46.204382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.612 [2024-11-07 10:50:46.204391] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.612 [2024-11-07 10:50:46.204400] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.612 [2024-11-07 10:50:46.214559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.612 qpair failed and we were unable to recover it.
00:22:18.612 [2024-11-07 10:50:46.224374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.612 [2024-11-07 10:50:46.224417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.612 [2024-11-07 10:50:46.224434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.612 [2024-11-07 10:50:46.224443] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.612 [2024-11-07 10:50:46.224452] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.612 [2024-11-07 10:50:46.234754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.612 qpair failed and we were unable to recover it.
00:22:18.612 [2024-11-07 10:50:46.244429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.612 [2024-11-07 10:50:46.244474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.612 [2024-11-07 10:50:46.244494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.612 [2024-11-07 10:50:46.244503] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.612 [2024-11-07 10:50:46.244526] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.612 [2024-11-07 10:50:46.254955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.612 qpair failed and we were unable to recover it.
00:22:18.612 [2024-11-07 10:50:46.264590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:18.612 [2024-11-07 10:50:46.264631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:18.612 [2024-11-07 10:50:46.264648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:18.612 [2024-11-07 10:50:46.264657] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:18.612 [2024-11-07 10:50:46.264665] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:22:18.612 [2024-11-07 10:50:46.274786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:18.612 qpair failed and we were unable to recover it.
00:22:18.871 [2024-11-07 10:50:46.284707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:18.871 [2024-11-07 10:50:46.284745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:18.871 [2024-11-07 10:50:46.284764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:18.871 [2024-11-07 10:50:46.284773] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:18.871 [2024-11-07 10:50:46.284782] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:18.871 [2024-11-07 10:50:46.294976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:18.871 qpair failed and we were unable to recover it. 00:22:18.871 [2024-11-07 10:50:46.304563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:18.871 [2024-11-07 10:50:46.304607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:18.871 [2024-11-07 10:50:46.304624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:18.871 [2024-11-07 10:50:46.304633] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:18.871 [2024-11-07 10:50:46.304641] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:18.871 [2024-11-07 10:50:46.314831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:18.871 qpair failed and we were unable to recover it. 00:22:18.871 [2024-11-07 10:50:46.324768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:18.871 [2024-11-07 10:50:46.324812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:18.871 [2024-11-07 10:50:46.324830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:18.871 [2024-11-07 10:50:46.324843] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:18.871 [2024-11-07 10:50:46.324852] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:18.871 [2024-11-07 10:50:46.335093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:18.871 qpair failed and we were unable to recover it. 
00:22:18.871 [2024-11-07 10:50:46.344875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:18.871 [2024-11-07 10:50:46.344917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:18.871 [2024-11-07 10:50:46.344935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:18.871 [2024-11-07 10:50:46.344944] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:18.871 [2024-11-07 10:50:46.344952] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:18.871 [2024-11-07 10:50:46.355117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:18.871 qpair failed and we were unable to recover it. 00:22:18.871 [2024-11-07 10:50:46.364952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:18.872 [2024-11-07 10:50:46.364994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:18.872 [2024-11-07 10:50:46.365012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:18.872 [2024-11-07 10:50:46.365021] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:18.872 [2024-11-07 10:50:46.365029] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:18.872 [2024-11-07 10:50:46.375110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:18.872 qpair failed and we were unable to recover it. 00:22:18.872 [2024-11-07 10:50:46.384902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:18.872 [2024-11-07 10:50:46.384943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:18.872 [2024-11-07 10:50:46.384960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:18.872 [2024-11-07 10:50:46.384969] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:18.872 [2024-11-07 10:50:46.384977] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:18.872 [2024-11-07 10:50:46.395075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:18.872 qpair failed and we were unable to recover it. 
00:22:18.872 [2024-11-07 10:50:46.404947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:18.872 [2024-11-07 10:50:46.404993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:18.872 [2024-11-07 10:50:46.405011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:18.872 [2024-11-07 10:50:46.405021] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:18.872 [2024-11-07 10:50:46.405029] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:18.872 [2024-11-07 10:50:46.415446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:18.872 qpair failed and we were unable to recover it. 00:22:18.872 [2024-11-07 10:50:46.425004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:18.872 [2024-11-07 10:50:46.425049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:18.872 [2024-11-07 10:50:46.425066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:18.872 [2024-11-07 10:50:46.425076] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:18.872 [2024-11-07 10:50:46.425084] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:18.872 [2024-11-07 10:50:46.435292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:18.872 qpair failed and we were unable to recover it. 00:22:18.872 [2024-11-07 10:50:46.445079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:18.872 [2024-11-07 10:50:46.445115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:18.872 [2024-11-07 10:50:46.445132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:18.872 [2024-11-07 10:50:46.445141] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:18.872 [2024-11-07 10:50:46.445150] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:18.872 [2024-11-07 10:50:46.455458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:18.872 qpair failed and we were unable to recover it. 
00:22:18.872 [2024-11-07 10:50:46.465132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:18.872 [2024-11-07 10:50:46.465172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:18.872 [2024-11-07 10:50:46.465189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:18.872 [2024-11-07 10:50:46.465198] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:18.872 [2024-11-07 10:50:46.465207] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:18.872 [2024-11-07 10:50:46.475376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:18.872 qpair failed and we were unable to recover it. 00:22:18.872 [2024-11-07 10:50:46.485234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:18.872 [2024-11-07 10:50:46.485282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:18.872 [2024-11-07 10:50:46.485299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:18.872 [2024-11-07 10:50:46.485309] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:18.872 [2024-11-07 10:50:46.485318] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:22:18.872 [2024-11-07 10:50:46.495631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:18.872 qpair failed and we were unable to recover it. 00:22:18.872 [2024-11-07 10:50:46.495672] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:22:18.872 A controller has encountered a failure and is being reset. 00:22:18.872 [2024-11-07 10:50:46.495744] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:22:18.872 [2024-11-07 10:50:46.497488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:22:18.872 Controller properly reset. 
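The cycle above repeats one signature: the target rejects the I/O qpair with "Unknown controller ID 0x1", the fabrics CONNECT completes with sct 1 / sc 130, and the host sees CQ transport error -6 (ENXIO) before abandoning the qpair. A quick triage sketch for a capture like this one; the log filename is an assumption, and the match strings are taken verbatim from this run:

  LOG=nvmf-phy-autotest.log   # assumed filename for a saved copy of this console log
  # how many qpairs the host tried and then abandoned
  grep -c 'qpair failed and we were unable to recover it' "$LOG"
  # every failure should carry the same fabrics status (sct 1, sc 130)
  grep -c 'Connect command completed with error: sct 1, sc 130' "$LOG"
  # confirm the reset that ends the cycle actually landed
  grep -n 'Controller properly reset' "$LOG"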
00:22:20.245 Write completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Write completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Write completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Write completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Read completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Write completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Read completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Read completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Read completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Write completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Write completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Write completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Read completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Read completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Read completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Read completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Read completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Write completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Write completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Write completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Write completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Write completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Write completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Read completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Read completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Read completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Read completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Write completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Write completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Read completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Read completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 Read completed with error (sct=0, sc=8) 00:22:20.245 starting I/O failed 00:22:20.245 [2024-11-07 10:50:47.511610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:21.179 Write completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Read completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Write completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Read completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Write completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Write completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Write completed with error (sct=0, sc=8) 
00:22:21.179 starting I/O failed 00:22:21.179 Write completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Read completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Write completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Read completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Write completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Read completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Write completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Read completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Read completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Write completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Write completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Read completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Read completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Read completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Write completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Read completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Read completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Write completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Write completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Read completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Write completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Write completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Read completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Write completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 Write completed with error (sct=0, sc=8) 00:22:21.179 starting I/O failed 00:22:21.179 [2024-11-07 10:50:48.534048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:21.179 Initializing NVMe Controllers 00:22:21.179 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:21.179 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:21.179 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:22:21.179 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:22:21.179 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:22:21.179 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:22:21.179 Initialization complete. Launching workers. 
00:22:21.179 Starting thread on core 1 00:22:21.179 Starting thread on core 2 00:22:21.179 Starting thread on core 3 00:22:21.179 Starting thread on core 0 00:22:21.179 10:50:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:22:21.179 00:22:21.179 real 0m12.659s 00:22:21.179 user 0m26.272s 00:22:21.179 sys 0m3.167s 00:22:21.179 10:50:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:21.179 10:50:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:21.179 ************************************ 00:22:21.179 END TEST nvmf_target_disconnect_tc2 00:22:21.179 ************************************ 00:22:21.179 10:50:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:22:21.179 10:50:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:22:21.179 10:50:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:21.179 10:50:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:21.179 10:50:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:21.179 ************************************ 00:22:21.179 START TEST nvmf_target_disconnect_tc3 00:22:21.179 ************************************ 00:22:21.179 10:50:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc3 00:22:21.179 10:50:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=3883069 00:22:21.179 10:50:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:22:21.179 10:50:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:22:23.128 10:50:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 3881968 00:22:23.128 10:50:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:22:24.524 Write completed with error (sct=0, sc=8) 00:22:24.524 starting I/O failed 00:22:24.524 Read completed with error (sct=0, sc=8) 00:22:24.524 starting I/O failed 00:22:24.524 Read completed with error (sct=0, sc=8) 00:22:24.524 starting I/O failed 00:22:24.524 Read completed with error (sct=0, sc=8) 00:22:24.524 starting I/O failed 00:22:24.524 Write completed with error (sct=0, sc=8) 00:22:24.524 starting I/O failed 00:22:24.524 Read completed with error (sct=0, sc=8) 00:22:24.524 starting I/O failed 00:22:24.524 Read completed with error (sct=0, sc=8) 00:22:24.524 starting I/O failed 00:22:24.524 Write completed with error (sct=0, sc=8) 00:22:24.524 starting I/O failed 00:22:24.524 Write completed with error (sct=0, sc=8) 00:22:24.524 starting I/O failed 00:22:24.524 Write completed with error (sct=0, sc=8) 00:22:24.524 starting I/O failed 00:22:24.524 Read 
completed with error (sct=0, sc=8) 00:22:24.524 starting I/O failed 00:22:24.524 Read completed with error (sct=0, sc=8) 00:22:24.524 starting I/O failed 00:22:24.525 Read completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Read completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Read completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Read completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Read completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Read completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Read completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Read completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Read completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Read completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Write completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Write completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Read completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Read completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Read completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Write completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Write completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Write completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Read completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 Read completed with error (sct=0, sc=8) 00:22:24.525 starting I/O failed 00:22:24.525 [2024-11-07 10:50:51.847932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:22:25.090 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 3881968 Killed "${NVMF_APP[@]}" "$@" 00:22:25.090 10:50:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:22:25.090 10:50:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:22:25.090 10:50:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:25.090 10:50:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:25.090 10:50:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:25.090 10:50:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3883835 00:22:25.090 10:50:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3883835 00:22:25.090 10:50:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:22:25.090 10:50:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- 
common/autotest_common.sh@833 -- # '[' -z 3883835 ']' 00:22:25.090 10:50:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.090 10:50:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:25.090 10:50:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.090 10:50:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:25.090 10:50:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:25.090 [2024-11-07 10:50:52.717384] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:22:25.090 [2024-11-07 10:50:52.717437] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.348 [2024-11-07 10:50:52.808176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.348 [2024-11-07 10:50:52.846150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.348 [2024-11-07 10:50:52.846194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.348 [2024-11-07 10:50:52.846203] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.348 [2024-11-07 10:50:52.846211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.348 [2024-11-07 10:50:52.846218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
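The tc3 run above drives I/O with the reconnect example launched from target_disconnect.sh line 55. A commented restatement of that invocation, as a sketch; the flag glosses follow SPDK example-tool conventions and should be read as assumptions, not authoritative documentation:

  #   -q 32      queue depth per qpair
  #   -o 4096    I/O size in bytes
  #   -w randrw  mixed random read/write workload
  #   -M 50      read percentage of the mix
  #   -t 10      run time in seconds
  #   -c 0xF     core mask, lcores 0-3 (matching the four worker threads in the log)
  #   -r ...     target transport ID; alt_traddr names the failover listener
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect \
      -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'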
00:22:25.348 [2024-11-07 10:50:52.847889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:25.348 [2024-11-07 10:50:52.848003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:25.348 [2024-11-07 10:50:52.848110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:25.348 [2024-11-07 10:50:52.848112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:22:25.348 Write completed with error (sct=0, sc=8) 00:22:25.348 starting I/O failed 00:22:25.348 Write completed with error (sct=0, sc=8) 00:22:25.348 starting I/O failed 00:22:25.348 Read completed with error (sct=0, sc=8) 00:22:25.348 starting I/O failed 00:22:25.348 Read completed with error (sct=0, sc=8) 00:22:25.348 starting I/O failed 00:22:25.348 Write completed with error (sct=0, sc=8) 00:22:25.348 starting I/O failed 00:22:25.348 Read completed with error (sct=0, sc=8) 00:22:25.348 starting I/O failed 00:22:25.349 Read completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Read completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Read completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Read completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Write completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Read completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Read completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Read completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Write completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Read completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Read completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Write completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Write completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Read completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Read completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Write completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Write completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Write completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Write completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Read completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Read completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Read completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Read completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Write completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Write completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 Write completed with error (sct=0, sc=8) 00:22:25.349 starting I/O failed 00:22:25.349 [2024-11-07 10:50:52.852998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:22:25.349 [2024-11-07 10:50:52.854701] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received 
RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:25.349 [2024-11-07 10:50:52.854723] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:25.349 [2024-11-07 10:50:52.854732] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:22:25.914 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:25.914 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@866 -- # return 0 00:22:25.914 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:25.914 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:25.914 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.172 Malloc0 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.172 [2024-11-07 10:50:53.667491] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x149e120/0x14a9be0) succeed. 00:22:26.172 [2024-11-07 10:50:53.678141] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x149f7b0/0x14eb280) succeed. 
00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.172 [2024-11-07 10:50:53.821849] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.172 10:50:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 3883069 00:22:26.429 [2024-11-07 10:50:53.858703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:22:26.429 qpair failed and we were unable to recover it. 
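The rpc_cmd calls above assemble the failover target on 192.168.100.9: a malloc bdev, an RDMA transport, subsystem cnode1, its namespace, and the data plus discovery listeners. Outside the harness the same bring-up could be reproduced with SPDK's rpc.py directly; a minimal sketch, assuming nvmf_tgt is already running on the default /var/tmp/spdk.sock:

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420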
00:22:26.429 [2024-11-07 10:50:53.860411] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:26.429 [2024-11-07 10:50:53.860434] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:26.429 [2024-11-07 10:50:53.860443] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:22:27.360 [2024-11-07 10:50:54.864282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:22:27.360 qpair failed and we were unable to recover it. 00:22:27.360 [2024-11-07 10:50:54.865810] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:27.360 [2024-11-07 10:50:54.865829] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:27.360 [2024-11-07 10:50:54.865838] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:22:28.290 [2024-11-07 10:50:55.869654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:22:28.290 qpair failed and we were unable to recover it. 00:22:28.290 [2024-11-07 10:50:55.871064] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:28.290 [2024-11-07 10:50:55.871083] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:28.290 [2024-11-07 10:50:55.871091] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:22:29.221 [2024-11-07 10:50:56.874845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:22:29.221 qpair failed and we were unable to recover it. 00:22:29.221 [2024-11-07 10:50:56.876283] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:29.221 [2024-11-07 10:50:56.876302] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:29.221 [2024-11-07 10:50:56.876310] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:22:30.590 [2024-11-07 10:50:57.880084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:22:30.590 qpair failed and we were unable to recover it. 
00:22:30.590 [2024-11-07 10:50:57.881487] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:30.590 [2024-11-07 10:50:57.881506] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:30.590 [2024-11-07 10:50:57.881519] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:22:31.946 [2024-11-07 10:50:58.885302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.946 qpair failed and we were unable to recover it. 00:22:31.946 [2024-11-07 10:50:58.886688] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:31.946 [2024-11-07 10:50:58.886711] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:31.946 [2024-11-07 10:50:58.886719] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:22:32.507 [2024-11-07 10:50:59.890586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:22:32.507 qpair failed and we were unable to recover it. 00:22:33.437 Read completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Write completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Read completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Read completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Read completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Read completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Write completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Write completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Write completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Read completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Write completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Read completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Write completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Write completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Write completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Read completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Read completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Write completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Read completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Read completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Read completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Write completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Read completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Read completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 
00:22:33.437 Read completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Write completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Read completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Write completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Read completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Write completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Read completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 Write completed with error (sct=0, sc=8) 00:22:33.437 starting I/O failed 00:22:33.437 [2024-11-07 10:51:00.895700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:22:34.368 Write completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Write completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Write completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Read completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Read completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Read completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Write completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Read completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Write completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Read completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Read completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Write completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Read completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Write completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Write completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Read completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Write completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Write completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Write completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Write completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Write completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Write completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Read completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Read completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Read completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Write completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Read completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Read completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Read completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Write completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 Read completed with error (sct=0, sc=8) 
00:22:34.368 starting I/O failed 00:22:34.368 Write completed with error (sct=0, sc=8) 00:22:34.368 starting I/O failed 00:22:34.368 [2024-11-07 10:51:01.900885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:22:34.368 [2024-11-07 10:51:01.900937] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed 00:22:34.369 A controller has encountered a failure and is being reset. 00:22:34.369 Resorting to new failover address 192.168.100.9 00:22:34.369 [2024-11-07 10:51:01.901042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:34.369 [2024-11-07 10:51:01.901120] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:22:34.369 [2024-11-07 10:51:01.933505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:22:34.369 Controller properly reset. 00:22:34.369 Initializing NVMe Controllers 00:22:34.369 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:34.369 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:34.369 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:22:34.369 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:22:34.369 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:22:34.369 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:22:34.369 Initialization complete. Launching workers. 
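After the keep-alive failure the host resorts to the alternate address 192.168.100.9 and the controller resets cleanly. A hedged check that the failover listener is actually exported by the running target; the jq filter and the shape of the output are assumptions:

  ./scripts/rpc.py nvmf_get_subsystems \
      | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .listen_addresses'
  # expected: an rdma entry with traddr 192.168.100.9, trsvcid 4420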
00:22:34.369 Starting thread on core 1 00:22:34.369 Starting thread on core 2 00:22:34.369 Starting thread on core 3 00:22:34.369 Starting thread on core 0 00:22:34.369 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:22:34.369 00:22:34.369 real 0m13.370s 00:22:34.369 user 0m54.560s 00:22:34.369 sys 0m4.041s 00:22:34.369 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:34.369 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:34.369 ************************************ 00:22:34.369 END TEST nvmf_target_disconnect_tc3 00:22:34.369 ************************************ 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:34.626 rmmod nvme_rdma 00:22:34.626 rmmod nvme_fabrics 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3883835 ']' 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3883835 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3883835 ']' 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 3883835 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3883835 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3883835' 00:22:34.626 killing process with pid 3883835 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@971 -- # kill 3883835 00:22:34.626 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 3883835 00:22:34.884 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:34.884 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:34.884 00:22:34.884 real 0m34.590s 00:22:34.884 user 2m1.720s 00:22:34.884 sys 0m12.966s 00:22:34.884 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:34.884 10:51:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:34.884 ************************************ 00:22:34.884 END TEST nvmf_target_disconnect 00:22:34.884 ************************************ 00:22:34.884 10:51:02 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:22:34.884 00:22:34.884 real 5m17.491s 00:22:34.884 user 12m21.835s 00:22:34.884 sys 1m38.201s 00:22:34.884 10:51:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:34.884 10:51:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.884 ************************************ 00:22:34.884 END TEST nvmf_host 00:22:34.884 ************************************ 00:22:34.884 10:51:02 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:22:34.884 00:22:34.884 real 17m9.520s 00:22:34.884 user 41m23.642s 00:22:34.884 sys 5m29.765s 00:22:34.884 10:51:02 nvmf_rdma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:34.884 10:51:02 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:34.884 ************************************ 00:22:34.884 END TEST nvmf_rdma 00:22:34.884 ************************************ 00:22:35.143 10:51:02 -- spdk/autotest.sh@278 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:22:35.143 10:51:02 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:35.143 10:51:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:35.143 10:51:02 -- common/autotest_common.sh@10 -- # set +x 00:22:35.143 ************************************ 00:22:35.143 START TEST spdkcli_nvmf_rdma 00:22:35.143 ************************************ 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:22:35.143 * Looking for test storage... 
00:22:35.143 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@1691 -- # lcov --version 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:35.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.143 --rc genhtml_branch_coverage=1 00:22:35.143 --rc genhtml_function_coverage=1 00:22:35.143 --rc genhtml_legend=1 00:22:35.143 --rc geninfo_all_blocks=1 00:22:35.143 --rc geninfo_unexecuted_blocks=1 00:22:35.143 00:22:35.143 ' 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:35.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:22:35.143 --rc genhtml_branch_coverage=1 00:22:35.143 --rc genhtml_function_coverage=1 00:22:35.143 --rc genhtml_legend=1 00:22:35.143 --rc geninfo_all_blocks=1 00:22:35.143 --rc geninfo_unexecuted_blocks=1 00:22:35.143 00:22:35.143 ' 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:35.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.143 --rc genhtml_branch_coverage=1 00:22:35.143 --rc genhtml_function_coverage=1 00:22:35.143 --rc genhtml_legend=1 00:22:35.143 --rc geninfo_all_blocks=1 00:22:35.143 --rc geninfo_unexecuted_blocks=1 00:22:35.143 00:22:35.143 ' 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:35.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.143 --rc genhtml_branch_coverage=1 00:22:35.143 --rc genhtml_function_coverage=1 00:22:35.143 --rc genhtml_legend=1 00:22:35.143 --rc geninfo_all_blocks=1 00:22:35.143 --rc geninfo_unexecuted_blocks=1 00:22:35.143 00:22:35.143 ' 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.143 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:35.144 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3885638 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 3885638 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@833 -- # '[' -z 3885638 ']' 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:35.144 10:51:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:35.402 [2024-11-07 10:51:02.850050] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:22:35.402 [2024-11-07 10:51:02.850106] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3885638 ] 00:22:35.403 [2024-11-07 10:51:02.924712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:35.403 [2024-11-07 10:51:02.966109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.403 [2024-11-07 10:51:02.966112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.403 10:51:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:35.403 10:51:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@866 -- # return 0 00:22:35.403 10:51:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:22:35.403 10:51:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:35.403 10:51:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:35.660 10:51:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:22:35.660 10:51:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:22:35.660 10:51:03 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:22:35.660 10:51:03 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:22:35.660 10:51:03 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.660 10:51:03 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:35.660 10:51:03 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:35.660 10:51:03 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:35.660 10:51:03 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.660 10:51:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:35.661 10:51:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
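[Editor's note] The waitforlisten step traced above blocks until the freshly started nvmf_tgt answers on its RPC socket. A minimal sketch of that polling pattern, assuming the stock rpc.py client and the default /var/tmp/spdk.sock path; the helper name and retry budget here are illustrative, not the exact code in common/autotest_common.sh:

  # Poll the RPC socket until the target answers or the process dies.
  wait_for_rpc() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=100
    while (( retries-- > 0 )); do
      kill -0 "$pid" 2>/dev/null || return 1    # target exited during startup
      ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
      sleep 0.5
    done
    return 1                                    # never came up within the budget
  }
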
00:22:35.661 10:51:03 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:35.661 10:51:03 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:35.661 10:51:03 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:22:35.661 10:51:03 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
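[Editor's note] The e810/x722/mlx arrays built above key PCI devices by vendor:device ID (0x8086 for Intel, 0x15b3 for Mellanox). The same inventory can be reproduced straight from sysfs; a small sketch, with the output format borrowed from the "Found ..." lines traced just below:

  # Enumerate candidate NVMe-oF NICs by PCI vendor ID, mirroring the tables above.
  for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    case "$vendor" in
      0x8086|0x15b3) echo "Found ${dev##*/} ($vendor - $device)" ;;
    esac
  done
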
00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:42.236 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:42.236 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:42.236 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:42.236 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@444 
-- # [[ yes == yes ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:42.236 
10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:42.236 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:42.493 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:42.493 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:42.493 altname enp217s0f0np0 00:22:42.493 altname ens818f0np0 00:22:42.493 inet 192.168.100.8/24 scope global mlx_0_0 00:22:42.493 valid_lft forever preferred_lft forever 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:42.493 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:42.493 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:42.493 altname enp217s0f1np1 00:22:42.493 altname ens818f1np1 00:22:42.493 inet 192.168.100.9/24 scope global mlx_0_1 00:22:42.493 valid_lft forever preferred_lft forever 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:22:42.493 192.168.100.9' 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:22:42.493 192.168.100.9' 00:22:42.493 10:51:09 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1 00:22:42.493 10:51:10 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:42.493 10:51:10 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:22:42.493 192.168.100.9' 00:22:42.493 10:51:10 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:22:42.493 10:51:10 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:22:42.493 10:51:10 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:42.493 10:51:10 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:22:42.493 10:51:10 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:42.493 10:51:10 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:22:42.493 10:51:10 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:22:42.493 10:51:10 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:22:42.493 10:51:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:22:42.493 10:51:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:22:42.493 10:51:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.493 10:51:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:42.493 10:51:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:42.493 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:42.493 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:22:42.493 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:22:42.493 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:22:42.493 '\''/bdevs/malloc create 32 512 
Malloc6'\'' '\''Malloc6'\'' True 00:22:42.493 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:22:42.493 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:42.493 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:22:42.493 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:22:42.493 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:22:42.493 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:42.493 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:22:42.493 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:22:42.493 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:42.493 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:22:42.493 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:22:42.493 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:22:42.493 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:42.493 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:42.494 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:22:42.494 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:22:42.494 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:22:42.494 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:22:42.494 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:42.494 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:22:42.494 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:22:42.494 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:22:42.494 ' 00:22:45.019 [2024-11-07 10:51:12.521995] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c69f80/0x1b5b700) succeed. 00:22:45.019 [2024-11-07 10:51:12.531481] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c6b660/0x1bdb740) succeed. 
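[Editor's note] Each spdkcli path in the job above resolves to a plain SPDK JSON-RPC call. An approximate translation of the first subsystem's setup, with values copied from the command list; flag spellings follow standard rpc.py usage, and the max_io_qpairs_per_ctrlr=4 cap from the job is left out of the sketch. This shows the mapping, not the harness's own code:

  # Backing bdev, RDMA transport, subsystem, namespace, and listener via rpc.py.
  ./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc3
  ./scripts/rpc.py nvmf_create_transport -t rdma -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4260
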
00:22:46.389 [2024-11-07 10:51:13.805340] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:22:48.914 [2024-11-07 10:51:16.100530] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:22:50.812 [2024-11-07 10:51:18.083085] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:22:52.183 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:22:52.183 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:22:52.183 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:22:52.183 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:22:52.183 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:22:52.183 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:22:52.183 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:22:52.183 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:52.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:22:52.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:22:52.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:22:52.183 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:52.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:22:52.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:22:52.183 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:52.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:22:52.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:22:52.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:22:52.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:52.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:52.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:22:52.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:22:52.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:22:52.183 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:22:52.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:52.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:22:52.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:22:52.183 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:22:52.183 10:51:19 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:22:52.183 10:51:19 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:52.183 10:51:19 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:52.183 10:51:19 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:22:52.183 10:51:19 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:52.183 10:51:19 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:52.183 10:51:19 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:22:52.183 10:51:19 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:22:52.748 10:51:20 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:22:52.748 10:51:20 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:22:52.748 10:51:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:22:52.748 10:51:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:52.748 10:51:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:52.748 10:51:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:22:52.748 10:51:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:52.748 10:51:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:52.748 10:51:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:22:52.748 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:22:52.748 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:52.748 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:22:52.748 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:22:52.748 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:22:52.748 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:22:52.748 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:52.748 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:22:52.748 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:22:52.748 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:22:52.748 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:22:52.748 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:22:52.748 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:22:52.748 ' 00:22:58.038 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:22:58.038 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:22:58.038 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:58.038 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:22:58.038 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:22:58.038 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:22:58.038 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:22:58.038 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:58.038 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:22:58.038 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:22:58.038 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:22:58.038 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:22:58.038 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:22:58.038 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 3885638 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # '[' -z 3885638 ']' 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # kill -0 3885638 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@957 -- # uname 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3885638 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3885638' 00:22:58.038 killing process with pid 3885638 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@971 -- # kill 3885638 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@976 -- # wait 3885638 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 
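[Editor's note] killprocess above follows the common SIGTERM-then-reap shape: probe with kill -0, signal, then wait so the exit status is collected. A condensed sketch; the traced helper also inspects the process name via ps before killing, which is elided here:

  # Minimal kill-and-reap, mirroring the sequence traced above.
  killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
    kill "$pid"                              # default signal is SIGTERM
    wait "$pid" 2>/dev/null || true          # reap; wait returns non-zero after a kill
  }
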
00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:58.038 10:51:25 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:58.038 rmmod nvme_rdma 00:22:58.038 rmmod nvme_fabrics 00:22:58.296 10:51:25 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:58.296 10:51:25 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:22:58.296 10:51:25 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:22:58.296 10:51:25 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:22:58.296 10:51:25 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:58.296 10:51:25 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:22:58.296 00:22:58.296 real 0m23.163s 00:22:58.296 user 0m49.478s 00:22:58.296 sys 0m6.036s 00:22:58.296 10:51:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:58.296 10:51:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:58.296 ************************************ 00:22:58.296 END TEST spdkcli_nvmf_rdma 00:22:58.296 ************************************ 00:22:58.296 10:51:25 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:22:58.296 10:51:25 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:58.296 10:51:25 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:58.296 10:51:25 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:22:58.296 10:51:25 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:22:58.296 10:51:25 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:22:58.296 10:51:25 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:58.296 10:51:25 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:58.296 10:51:25 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:58.296 10:51:25 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:22:58.296 10:51:25 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:58.296 10:51:25 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:22:58.296 10:51:25 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:58.296 10:51:25 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:58.296 10:51:25 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:22:58.296 10:51:25 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:22:58.296 10:51:25 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:22:58.297 10:51:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:58.297 10:51:25 -- common/autotest_common.sh@10 -- # set +x 00:22:58.297 10:51:25 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:22:58.297 10:51:25 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:22:58.297 10:51:25 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:22:58.297 10:51:25 -- common/autotest_common.sh@10 -- # set +x 00:23:04.859 INFO: APP EXITING 00:23:04.859 INFO: killing all VMs 00:23:04.859 INFO: killing vhost app 00:23:04.859 INFO: EXIT DONE 00:23:07.391 Waiting for block devices as requested 00:23:07.391 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:07.391 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:07.648 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:07.648 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:07.648 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:07.648 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:07.907 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:07.907 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 
00:23:07.907 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:08.165 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:08.165 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:08.165 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:08.443 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:08.444 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:08.444 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:08.444 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:08.781 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:23:12.061 Cleaning 00:23:12.061 Removing: /var/run/dpdk/spdk0/config 00:23:12.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:12.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:12.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:12.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:12.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:23:12.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:23:12.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:23:12.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:23:12.061 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:12.061 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:12.061 Removing: /var/run/dpdk/spdk1/config 00:23:12.061 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:12.061 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:12.061 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:12.061 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:12.061 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:23:12.061 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:23:12.061 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:23:12.319 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:23:12.319 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:12.319 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:12.319 Removing: /var/run/dpdk/spdk1/mp_socket 00:23:12.319 Removing: /var/run/dpdk/spdk2/config 00:23:12.319 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:12.319 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:12.319 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:12.319 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:12.319 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:23:12.319 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:23:12.319 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:23:12.319 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:23:12.319 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:12.319 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:12.319 Removing: /var/run/dpdk/spdk3/config 00:23:12.319 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:12.319 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:12.319 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:12.319 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:12.319 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:23:12.319 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:23:12.319 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:23:12.319 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:23:12.319 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:12.319 Removing: /var/run/dpdk/spdk3/hugepage_info 00:23:12.319 Removing: /var/run/dpdk/spdk4/config 00:23:12.319 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:12.319 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:12.319 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:12.319 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:12.319 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:23:12.319 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:23:12.319 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:23:12.319 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:23:12.319 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:12.319 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:12.319 Removing: /dev/shm/bdevperf_trace.pid3639280 00:23:12.319 Removing: /dev/shm/bdev_svc_trace.1 00:23:12.319 Removing: /dev/shm/nvmf_trace.0 00:23:12.319 Removing: /dev/shm/spdk_tgt_trace.pid3595562 00:23:12.319 Removing: /var/run/dpdk/spdk0 00:23:12.319 Removing: /var/run/dpdk/spdk1 00:23:12.319 Removing: /var/run/dpdk/spdk2 00:23:12.319 Removing: /var/run/dpdk/spdk3 00:23:12.319 Removing: /var/run/dpdk/spdk4 00:23:12.319 Removing: /var/run/dpdk/spdk_pid3592816 00:23:12.319 Removing: /var/run/dpdk/spdk_pid3594089 00:23:12.319 Removing: /var/run/dpdk/spdk_pid3595562 00:23:12.319 Removing: /var/run/dpdk/spdk_pid3596027 00:23:12.319 Removing: /var/run/dpdk/spdk_pid3597110 00:23:12.319 Removing: /var/run/dpdk/spdk_pid3597135 00:23:12.319 Removing: /var/run/dpdk/spdk_pid3598249 00:23:12.319 Removing: /var/run/dpdk/spdk_pid3598255 00:23:12.319 Removing: /var/run/dpdk/spdk_pid3598645 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3603740 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3605345 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3606058 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3606412 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3606834 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3607346 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3607493 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3607774 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3608081 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3608853 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3611745 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3612023 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3612310 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3612314 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3612869 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3612899 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3613428 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3613492 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3613728 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3613914 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3614015 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3614181 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3614633 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3614912 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3615238 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3619101 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3623308 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3633726 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3634509 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3639280 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3639622 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3643631 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3649369 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3652545 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3662489 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3686684 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3690285 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3732739 
00:23:12.576 Removing: /var/run/dpdk/spdk_pid3737945 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3743452 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3752364 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3791869 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3793032 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3794296 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3795486 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3799953 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3806156 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3812988 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3813774 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3814800 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3815589 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3816100 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3820557 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3820559 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3825037 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3825555 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3826074 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3826851 00:23:12.576 Removing: /var/run/dpdk/spdk_pid3826856 00:23:12.834 Removing: /var/run/dpdk/spdk_pid3831607 00:23:12.834 Removing: /var/run/dpdk/spdk_pid3832235 00:23:12.834 Removing: /var/run/dpdk/spdk_pid3836542 00:23:12.834 Removing: /var/run/dpdk/spdk_pid3839143 00:23:12.834 Removing: /var/run/dpdk/spdk_pid3845000 00:23:12.834 Removing: /var/run/dpdk/spdk_pid3854919 00:23:12.834 Removing: /var/run/dpdk/spdk_pid3854927 00:23:12.834 Removing: /var/run/dpdk/spdk_pid3874721 00:23:12.834 Removing: /var/run/dpdk/spdk_pid3874954 00:23:12.834 Removing: /var/run/dpdk/spdk_pid3880868 00:23:12.834 Removing: /var/run/dpdk/spdk_pid3881191 00:23:12.834 Removing: /var/run/dpdk/spdk_pid3883069 00:23:12.834 Removing: /var/run/dpdk/spdk_pid3885638 00:23:12.834 Clean 00:23:12.834 10:51:40 -- common/autotest_common.sh@1451 -- # return 0 00:23:12.834 10:51:40 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:23:12.834 10:51:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:12.834 10:51:40 -- common/autotest_common.sh@10 -- # set +x 00:23:12.834 10:51:40 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:23:12.834 10:51:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:12.834 10:51:40 -- common/autotest_common.sh@10 -- # set +x 00:23:12.834 10:51:40 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:23:12.834 10:51:40 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:23:12.834 10:51:40 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:23:12.834 10:51:40 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:23:12.834 10:51:40 -- spdk/autotest.sh@394 -- # hostname 00:23:12.834 10:51:40 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:23:13.092 geninfo: WARNING: invalid characters removed from testname! 
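[Editor's note] The per-test capture above feeds the merge-and-filter steps traced next. Condensed, with $SPDK_DIR standing in for the workspace path and the long --rc option list elided; the run below also applies a few more -r filters for example and app code:

  # Capture per-test data, merge with the pre-test baseline, then strip
  # vendored DPDK sources and system headers from the combined report.
  lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
  lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
  lcov -q -r cov_total.info '/usr/*' -o cov_total.info
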
00:23:31.156 10:51:58 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:33.063 10:52:00 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:34.961 10:52:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:36.343 10:52:03 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:38.245 10:52:05 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:39.621 10:52:07 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:41.521 10:52:08 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:41.521 10:52:08 -- spdk/autorun.sh@1 -- $ timing_finish 00:23:41.522 10:52:08 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]] 00:23:41.522 10:52:08 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:41.522 10:52:08 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:23:41.522 10:52:08 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:23:41.522 + [[ -n 3513937 ]] 00:23:41.522 + sudo kill 3513937 00:23:41.531 [Pipeline] } 00:23:41.547 [Pipeline] // stage 00:23:41.553 [Pipeline] } 00:23:41.568 [Pipeline] 
// timeout 00:23:41.574 [Pipeline] } 00:23:41.589 [Pipeline] // catchError 00:23:41.595 [Pipeline] } 00:23:41.610 [Pipeline] // wrap 00:23:41.617 [Pipeline] } 00:23:41.631 [Pipeline] // catchError 00:23:41.641 [Pipeline] stage 00:23:41.643 [Pipeline] { (Epilogue) 00:23:41.658 [Pipeline] catchError 00:23:41.660 [Pipeline] { 00:23:41.674 [Pipeline] echo 00:23:41.676 Cleanup processes 00:23:41.684 [Pipeline] sh 00:23:41.967 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:23:41.967 3898821 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:23:41.984 [Pipeline] sh 00:23:42.265 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:23:42.265 ++ grep -v 'sudo pgrep' 00:23:42.265 ++ awk '{print $1}' 00:23:42.265 + sudo kill -9 00:23:42.265 + true 00:23:42.276 [Pipeline] sh 00:23:42.555 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:42.555 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:23:46.743 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:23:50.940 [Pipeline] sh 00:23:51.225 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:51.225 Artifacts sizes are good 00:23:51.240 [Pipeline] archiveArtifacts 00:23:51.248 Archiving artifacts 00:23:51.390 [Pipeline] sh 00:23:51.735 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:23:51.749 [Pipeline] cleanWs 00:23:51.760 [WS-CLEANUP] Deleting project workspace... 00:23:51.760 [WS-CLEANUP] Deferred wipeout is used... 00:23:51.767 [WS-CLEANUP] done 00:23:51.769 [Pipeline] } 00:23:51.789 [Pipeline] // catchError 00:23:51.804 [Pipeline] sh 00:23:52.083 + logger -p user.info -t JENKINS-CI 00:23:52.088 [Pipeline] } 00:23:52.099 [Pipeline] // stage 00:23:52.104 [Pipeline] } 00:23:52.117 [Pipeline] // node 00:23:52.123 [Pipeline] End of Pipeline 00:23:52.155 Finished: SUCCESS